AI Governance & Risk Management · Guide

AI Liability: Who's Responsible When AI Fails?

October 11, 2025 · 14 min read · Michael Lansdowne Hauge
For: Legal/Compliance · Consultant · CTO/CIO · IT Manager · CISO · Board Member

Navigate the complex legal landscape of AI liability. Understand product liability, professional negligence, algorithmic accountability, and emerging AI-specific liability frameworks across jurisdictions.


Key Takeaways

  1. AI liability is distributed across developers, deployers, data providers, and human decision-makers; responsibility turns on role, contracts, and applicable law.
  2. Product liability regimes are most likely to apply when AI is embedded in physical products, while AI-as-a-service is primarily governed by contract and negligence law.
  3. Professionals and fiduciaries remain fully responsible for AI-assisted decisions and must understand, supervise, and, where appropriate, override AI outputs.
  4. Algorithmic discrimination can trigger civil rights liability under Title VII, the Fair Housing Act, ECOA, and analogous laws, even when biased outcomes stem from third-party tools.
  5. The proposed EU AI Liability Directive and accompanying strict liability regime would materially increase exposure for high-risk AI deployers compared with the more fragmented, fault-based U.S. approach.
  6. Liability caps and disclaimers have hard limits, particularly for personal injury and consumer harms, making robust testing, documentation, and oversight essential.
  7. Dedicated insurance, spanning product liability, E&O, cyber, EPLI, and D&O, is becoming a critical component of AI risk transfer strategies.

When an Uber self-driving vehicle struck and killed a pedestrian in Tempe, Arizona in 2018, the legal system confronted a question it was never designed to answer: who is responsible when an algorithm causes harm? The backup safety driver was criminally charged with negligent homicide and, in 2023, pleaded guilty to a reduced charge of endangerment. Uber settled the civil claims. But no court ever determined whether the AI system itself was defective, or who among the vehicle manufacturer (Volvo), the sensor suppliers, and the software developers should bear ultimate liability. That unanswered question sits at the center of a rapidly widening gap between the harms AI systems can cause and the legal frameworks available to address them.

The AI Liability Gap

Why Traditional Liability Frameworks Fall Short

The foundational challenge is that AI systems defy the assumptions underlying centuries of tort law. Modern AI models operate with billions of parameters, making it extraordinarily difficult to trace a harmful outcome back to a specific cause. Their decision-making logic remains opaque, even to the engineers who built them, which means plaintiffs face a near-impossible burden when attempting to prove defect or negligence through conventional means.

Responsibility is further fragmented across an entire value chain of actors. Data providers, model developers, system integrators, deployers, and end users all play a role in how an AI system performs, yet existing frameworks offer no clear method for apportioning liability among them. Perhaps most fundamentally, self-learning systems may evolve in ways their creators never anticipated, breaking the causal chains that negligence and product liability law require.

Each of the traditional legal theories carries its own limitations when applied to AI. Product liability doctrine was designed for physical goods and fits awkwardly with software-as-a-service or algorithmic outputs. Professional negligence law presumes a human expert exercising judgment, not a statistical model generating predictions. Contract law may offer some recourse, but terms of service routinely disclaim liability, and enforceability varies dramatically by jurisdiction and harm type. Even criminal law struggles, given its requirement of mens rea, a form of intent that is difficult to attribute to an algorithm or, in many cases, to establish against the corporate actors behind it.

High-Profile AI Liability Cases

The Uber fatality is far from the only case exposing these fault lines. The National Highway Traffic Safety Administration has opened investigations into more than 35 crashes involving Tesla's Autopilot and Full Self-Driving systems between 2016 and 2024. Civil suits have alleged deceptive marketing, arguing that the "Autopilot" branding implies full automation when the system is classified as Level 2 driver assistance. Product liability claims have centered on design defects, including over-reliance on camera-based perception and inadequate driver monitoring systems. Most of these cases have settled without definitive liability rulings, leaving the legal landscape largely undefined.

Algorithmic discrimination has generated its own set of landmark disputes. In 2018, Amazon disclosed that an internal AI recruiting tool had learned to penalize resumes that mentioned "women's" in any context, from chess clubs to colleges. Although Amazon discontinued the tool before deployment and no lawsuit was filed, the episode illustrated how AI systems trained on historical data can systematically reproduce the biases embedded in that data. A year later, the Apple Card drew scrutiny when Goldman Sachs' credit algorithm allegedly offered women lower credit limits than men with identical financial profiles. The New York Department of Financial Services investigated and ultimately found no violation of fair lending law, though its report criticized shortcomings in transparency and pressed for stronger fairness testing. Had they proceeded to litigation, the two episodes would have raised claims under Title VII (employment discrimination) and the Equal Credit Opportunity Act (credit discrimination), respectively.

It is worth drawing a distinction between liability and accountability in this context. Liability is legal responsibility, enforceable through lawsuits and regulatory fines. Accountability is broader: it encompasses the ethical, social, and organizational responsibility that may not rise to the level of a legal claim but nonetheless shapes reputation, trust, and long-term business viability.

Product Liability Framework

When AI Qualifies as a "Product"

Under the Restatement (Third) of Torts: Products Liability, manufacturers face strict liability for defective products that cause harm, regardless of fault. The framework recognizes three categories of defect. A manufacturing defect occurs when the product deviates from its intended design, which in the AI context might involve data corruption or incorrect model weights. A design defect exists when the product's inherent design makes it unreasonably dangerous, as when an AI system is trained on biased data or deployed without adequate safety constraints. A warning defect arises from inadequate instructions or disclosures about the product's risks, including its limitations and known failure modes.

Whether a given AI system qualifies as a "product" under this framework depends on how it is delivered. AI embedded in physical goods, such as autonomous vehicles, medical devices, and industrial robots, clearly falls within the scope of product liability. AI sold as packaged software under a one-time license arguably qualifies as well. However, AI delivered as a service through cloud APIs or SaaS platforms is generally governed by contract law rather than product liability, a distinction that leaves a significant category of AI systems outside the strict liability regime.

Design Defect Claims Against AI

Design defect claims against AI systems are evaluated under the risk-utility test, which asks whether the risks of a product's design outweigh its utility in light of the magnitude and likelihood of foreseeable harms, the availability and cost of safer alternative designs, and the user's ability to avoid harm.

AI-specific design defects that may give rise to claims include training on unrepresentative or biased datasets, deploying without adequate validation on edge cases, omitting guardrails that would prevent dangerous outputs, permitting fully autonomous decision-making in high-stakes contexts, and failing to implement safe failure modes. The plaintiff in a design defect case must demonstrate that a reasonable alternative design was feasible and would have prevented the harm. For an AI hiring tool that produces disparate impact, for instance, the reasonable alternative might be a fairness-constrained model with equal selection rates across demographic groups. In practice, however, the technical complexity of AI systems makes it exceptionally difficult for plaintiffs and juries to evaluate whether a proposed alternative design would have been effective.
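To make the "reasonable alternative design" idea concrete, the sketch below shows one way such an alternative could be demonstrated for a scoring model: a post-processing step that sets per-group score cutoffs to equalize selection rates. This is a minimal illustration under stated assumptions, not a definitive implementation; the names and distributions are hypothetical, and whether group-specific thresholds are themselves legally permissible is a separate, contested question.

```python
import numpy as np

def equalized_thresholds(scores, groups, target_rate):
    """Choose a per-group score cutoff so each group is selected at
    roughly the same target rate -- one way to show a 'reasonable
    alternative design' was feasible for a scoring model."""
    thresholds = {}
    for g in np.unique(groups):
        g_scores = np.sort(scores[groups == g])
        # The cutoff sits at the (1 - target_rate) quantile of the
        # group's score distribution.
        k = int(len(g_scores) * (1 - target_rate))
        thresholds[g] = g_scores[min(k, len(g_scores) - 1)]
    return thresholds

rng = np.random.default_rng(seed=0)
groups = np.array(["A"] * 500 + ["B"] * 500)
# Hypothetical score distributions that differ by group, as they
# might after training on skewed historical data.
scores = np.concatenate([rng.normal(0.60, 0.15, 500),
                         rng.normal(0.50, 0.15, 500)])

for g, cutoff in equalized_thresholds(scores, groups, 0.30).items():
    rate = (scores[groups == g] >= cutoff).mean()
    print(f"group {g}: cutoff {cutoff:.3f}, selection rate {rate:.2f}")
```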

Warning Defect Claims

Manufacturers have a duty to warn about risks that are not obvious to ordinary users, and those warnings must be clear, conspicuous, and comprehensible. Applied to AI, this duty encompasses several specific obligations: disclosing what the system can and cannot do, describing known scenarios in which it performs poorly (such as low lighting conditions for vision systems, unusual accents for speech recognition, or rare conditions for medical diagnostics), warning if the system exhibits disparate performance across demographic groups, and communicating uncertainty through confidence intervals rather than presenting point predictions as certainties.
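To illustrate the last obligation, a system can report an interval rather than a bare score. A minimal sketch, assuming a simple normal-approximation interval around a probability estimate; the function name and figures are hypothetical:

```python
import math

def predict_with_interval(p_hat, n, z=1.96):
    """Return a point estimate plus an approximate 95% interval so
    the user sees uncertainty rather than false precision.

    p_hat: the model's estimated probability for this case
    n:     effective sample size behind the estimate
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, (max(0.0, p_hat - z * se), min(1.0, p_hat + z * se))

p, (lo, hi) = predict_with_interval(0.82, n=40)
# Display "82% (95% CI: 70%-94%)" instead of a bare "82%".
print(f"estimated probability {p:.0%} (95% CI: {lo:.0%}-{hi:.0%})")
```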

In the medical context, the learned intermediary doctrine may modify these obligations. For prescription medical devices, including AI-based diagnostic tools, the manufacturer's duty to warn runs to the physician rather than the patient directly. If the manufacturer provides adequate warnings to physicians, it may be shielded from direct patient liability, with the physician serving as the intermediary responsible for informing patients of the relevant risks.

Fewer than 50 product liability cases involving AI had reached trial in the United States as of 2024, according to available court records. The vast majority have settled, meaning that juries have not yet established clear precedents on design defect standards for AI systems.

Professional Negligence and Standard of Care

When AI Acts as "Professional"

Professional negligence law requires a plaintiff to establish four elements: a duty of care arising from a professional-client relationship, a breach of that duty measured against the applicable standard of care, a causal connection between the breach and the harm, and actual damages. The standard of care demands that professionals exercise the skill and knowledge consistent with reasonably competent practitioners in their field.

An emerging and unresolved question is how to apply this framework when AI systems perform functions traditionally reserved for licensed professionals. Medical diagnostic AI, legal research AI, financial advisory AI, and accounting AI all operate in domains where human practitioners owe well-established duties of care. Three competing approaches have been proposed for defining the standard of care in these contexts. A human-equivalent standard would require AI to perform at least as well as a competent human professional. An AI-specific standard would judge AI systems against other AI systems, though this approach risks normalizing industry-wide flaws. A hybrid standard would evaluate the performance of the AI-human team as an integrated unit.

Malpractice Liability for AI-Assisted Professionals

The interaction between human judgment and algorithmic recommendation creates novel liability questions. Consider a physician who relies on an AI diagnostic tool. If the physician follows the AI's recommendation and the AI is wrong, the physician may argue that reliance on an FDA-cleared system was reasonable. The plaintiff may counter that the physician should have overridden an obviously incorrect recommendation. The outcome will likely turn on how apparent the error was and whether the physician exercised independent clinical judgment.

The reverse scenario is equally instructive. If a physician overrides the AI and the AI's diagnosis turns out to have been correct, the plaintiff may argue that ignoring a correct AI diagnosis constitutes negligence. Courts, however, are likely to find that exercising independent professional judgment is not negligent in itself, unless the decision to override was clearly unreasonable given the available information.

Both scenarios point to the growing concern over automation bias, the well-documented tendency to defer to algorithmic recommendations without sufficient critical evaluation. Courts may increasingly find professionals negligent for blindly trusting AI, establishing a standard of care that requires understanding AI limitations and independently verifying its recommendations before acting on them.

Fiduciary Duty and AI

Attorneys, investment advisers, trustees, and corporate directors all owe fiduciary duties to their clients or beneficiaries, the highest standard of obligation recognized in law. These duties include the duty of loyalty (to act in the client's best interest) and the duty of care (to exercise reasonable diligence). The critical principle governing AI in fiduciary contexts is that fiduciary duty is non-delegable. Robo-advisors and algorithmic investment platforms do not absolve the investment adviser of responsibility for the advice provided. Attorneys who rely on AI research tools remain fully liable for errors in their work product. Corporate directors who use AI to inform strategic decisions must still exercise the independent oversight their role demands. Deploying AI may enhance the quality of fiduciary services, but it cannot transfer the legal responsibility that accompanies them.

Courts are likely to hold that reasonable professionals in fiduciary roles must understand the capabilities and limitations of the AI tools they use, independently verify AI outputs in high-stakes decisions, and maintain sufficient competence to detect when an AI system has produced an erroneous result.

Algorithmic Discrimination and Civil Rights Liability

Disparate Impact Framework

Several federal civil rights statutes create liability for algorithmic discrimination. Title VII prohibits employment discrimination on the basis of race, color, religion, sex, and national origin. The Fair Housing Act extends protections in housing and credit to additional categories including familial status and disability. The Equal Credit Opportunity Act covers credit decisions and adds marital status, age, and public assistance receipt to the list of protected characteristics. The Americans with Disabilities Act addresses discrimination in employment and public accommodations.

Under the disparate impact theory, a facially neutral practice that disproportionately harms a protected group gives rise to liability even without discriminatory intent. The plaintiff must demonstrate a statistical disparity, at which point the defendant must prove the practice serves a legitimate business interest and is necessary. The plaintiff can then prevail by showing that a less discriminatory alternative exists.

AI systems are particularly susceptible to disparate impact claims for three reasons. Training on historical data perpetuates the patterns of past discrimination embedded in that data. Proxy variables such as zip code, educational institution, and name often correlate closely with protected characteristics even when those characteristics are not used directly. And optimization for predictive accuracy alone, without fairness constraints, can produce models that systematically disadvantage certain groups. Critically, employers who purchase AI hiring tools from third-party vendors remain liable for the discriminatory outcomes those tools produce. "The algorithm did it" is not a recognized defense. Employers must conduct adverse impact analysis and validate the job-relatedness of the criteria their AI tools apply.
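The EEOC's four-fifths rule provides the standard first-pass screen for such an analysis: adverse impact is presumed if any group's selection rate falls below 80% of the highest group's rate. A minimal sketch of that computation, using hypothetical hiring-tool outcomes:

```python
from collections import Counter

def adverse_impact_ratios(outcomes):
    """Compute each group's selection rate relative to the
    highest-rate group (the EEOC four-fifths rule screen).

    outcomes: iterable of (group, selected) pairs.
    """
    totals, hits = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        hits[group] += int(selected)
    rates = {g: hits[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical outcomes: group A selected 60/100, group B 35/100.
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 35 + [("B", False)] * 65
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "presumptive adverse impact" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```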

Recent Enforcement Actions

Enforcement activity has accelerated in recent years. In 2019, the Department of Housing and Urban Development brought a complaint against Facebook, alleging that its advertising platform allowed advertisers to exclude users by race, religion, and national origin when targeting housing ads, in violation of the Fair Housing Act. The case settled in 2022, with Facebook paying a $115,054 penalty, the statutory maximum under the Fair Housing Act, agreeing to modify its algorithms, and submitting to independent oversight.

The Equal Employment Opportunity Commission issued guidance in 2023 clarifying that employers bear liability for discriminatory AI hiring tools. The guidance specified that validation must demonstrate that tools predict actual job performance rather than merely correlating with race or sex, and that regular adverse impact testing is required. Separately, the New York Department of Financial Services has been investigating algorithmic bias in credit and insurance underwriting, with a particular focus on proxy discrimination, the practice of using ostensibly non-protected variables that in practice correlate tightly with protected characteristics.

EU AI Liability Directive (Proposed)

Fault-Based Liability Regime (Chapter 2)

The European Union's AI Liability Directive, proposed by the European Commission in 2022, represents the most comprehensive legislative response to AI liability to date. Under its fault-based regime, a plaintiff who demonstrates that an AI system caused harm, that the defendant failed to comply with obligations under the EU AI Act, and that the non-compliance was causally linked to the harm benefits from a presumption of fault. The burden then shifts to the defendant to prove it was not at fault.

The Directive also establishes significant disclosure obligations. Courts can order defendants to produce relevant evidence about their AI systems when a plaintiff presents a plausible claim, and failure to comply can result in a presumption of non-compliance. This reduced burden of proof is a deliberate response to the opacity problem: plaintiffs need not prove the exact technical defect within an AI system, because demonstrating AI-caused harm together with regulatory non-compliance is sufficient to shift the burden to the defendant.

Strict Liability for High-Risk AI (Proposed Directive)

For AI systems classified as high-risk under the AI Act, including those used in employment, credit, law enforcement, and critical infrastructure, the proposed directive goes further still, imposing strict liability. Under this regime, a victim need only show that a high-risk AI system caused harm through property damage, personal injury, or economic loss, and that a causal link exists between the AI and the harm. No proof of defect or negligence is required.

The proposed damages framework reflects the seriousness of this approach. Liability for personal injury and death is unlimited. Property damage is subject to a cap of 2 million euros. Economic loss not consequential to physical harm is capped at 1 million euros. The only available defenses are force majeure and the victim's own intentional misconduct or gross negligence.

Allocation of Liability

Primary liability under the Directive falls on the deployer, defined as the entity using AI in a commercial activity. However, the AI developer or manufacturer bears liability if the deployer is insolvent or cannot be identified, or if the harm resulted from the supplier's own failure to comply with AI Act obligations. Where multiple parties are responsible, the Directive provides for joint and several liability, allowing the plaintiff to recover the full amount from any liable party.

This proposed strict liability regime would represent a fundamental departure from the fault-based approach that characterizes U.S. law. Deployers of high-risk AI in the EU would face liability simply for causing harm, without any requirement that the plaintiff prove defect or negligence.

U.S. Liability Landscape

Federal Approach (Sector-Specific)

The United States has not enacted a comprehensive federal AI liability statute. Instead, the federal approach relies on existing laws applied, sometimes awkwardly, to AI contexts. The regulatory landscape is fragmented across sectors: the National Highway Traffic Safety Administration oversees autonomous vehicles, the Food and Drug Administration regulates AI-based medical devices, the Securities and Exchange Commission and the Office of the Comptroller of the Currency govern AI in financial services, and the EEOC addresses AI in employment. The Federal Trade Commission exercises broader authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, but this catch-all provision was not designed with AI systems in mind.

State Tort Law

In the absence of federal legislation, state tort law carries much of the liability burden. General negligence principles require AI developers and deployers to exercise reasonable care toward foreseeable plaintiffs, with breach potentially established by failing to test adequately, deploying known-defective systems, or providing inadequate warnings. Under the Restatement (Third), strict liability applies to defective products, though states vary on whether software and AI qualify as "products." The majority trend treats software as a product when sold as a packaged good but exempts services from product liability.

Misrepresentation claims offer another avenue. An AI vendor that intentionally makes false statements about its system's capabilities faces liability for fraudulent misrepresentation, while careless false statements to persons relying on the vendor's expertise give rise to claims for negligent misrepresentation. A vendor claiming "99% accuracy" without adequate testing to support that figure, for instance, may face exposure under either theory.
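As an illustration of what "adequate testing to support that figure" might mean quantitatively, a vendor can check whether a claimed accuracy is consistent with the lower bound of its own test results. A minimal sketch, assuming a simple normal approximation; the numbers are hypothetical:

```python
import math

def claim_supported(correct, total, claimed_accuracy, z=1.645):
    """Test whether observed results support an accuracy claim: the
    claim holds only if it does not exceed the one-sided 95% lower
    confidence bound of the measured accuracy."""
    observed = correct / total
    se = math.sqrt(observed * (1 - observed) / total)
    lower_bound = observed - z * se
    return observed, lower_bound, claimed_accuracy <= lower_bound

obs, lb, ok = claim_supported(correct=485, total=500, claimed_accuracy=0.99)
print(f"observed {obs:.1%}, lower bound {lb:.1%}, "
      f"claim of 99% {'supported' if ok else 'NOT supported'}")
```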

State legislatures are beginning to respond with AI-specific proposals. California has considered a bill that would create strict liability for high-risk AI systems. Texas is developing a liability framework for autonomous vehicles. New York City's Local Law 144 requires annual bias audits of automated employment decision tools, with civil penalties for non-compliance, though it does not expressly create a private right of action.

Contract Law and Liability Disclaimers

AI vendor contracts routinely include limitation of liability clauses that cap damages at the fees paid or disclaim consequential damages entirely. The enforceability of these provisions depends heavily on context. In consumer contracts, courts frequently strike down such clauses under the unconscionability doctrine or consumer protection statutes. In negotiated business-to-business agreements, limitation clauses are generally enforceable unless they contravene public policy. Across nearly all jurisdictions, however, parties cannot disclaim liability for physical harm.

Indemnification provisions allocate risk between vendors and customers. Vendors may agree to indemnify customers for third-party claims arising from AI defects, while customers may be required to indemnify vendors for harms caused by misuse. The central question in any AI deployment relationship is who bears the risk of AI-caused harms to third parties, and the answer is almost always determined by the specifics of the contract rather than by default legal rules.

For consumer-facing AI, click-wrap and browse-wrap agreements are the norm. These agreements typically disclaim all warranties and limit liability, but courts subject them to scrutiny on questions of conspicuousness and whether the user had a meaningful choice in accepting the terms.

Practical Risk Management Strategies

For AI Developers and Vendors

The strongest liability defense available to any AI developer is a well-documented record of reasonable care. That begins with rigorous testing and validation: training on diverse, representative datasets that include demographic subgroups and edge cases, conducting adversarial red-team exercises and penetration testing, and performing systematic failure mode analysis. Every stage of this process should be documented, as that documentation becomes the evidentiary foundation of a negligence defense.

Transparency obligations extend beyond marketing materials to detailed technical disclosures. Model cards, the emerging standard for documenting training data sources and performance metrics, should be published for every production system. Warnings about known failure modes and demographic performance disparities must be clear, conspicuous, and actionable. End users should receive training on how to use the AI safely and when to override its recommendations.
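Because model cards are structured documents, they are often maintained as machine-readable artifacts. Below is a minimal sketch of the fields such a card might carry, loosely following published model card proposals; the schema and values are illustrative, not a vendor standard:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Illustrative model card capturing the disclosures discussed
    above: intended use, training data, metrics, and limitations."""
    name: str
    version: str
    intended_use: str
    training_data: str
    metrics: dict                 # overall and per-subgroup performance
    known_failure_modes: list
    out_of_scope_uses: list = field(default_factory=list)

card = ModelCard(
    name="resume-screener",       # hypothetical system
    version="2.1.0",
    intended_use="Rank applications for human review; not for automated rejection.",
    training_data="2019-2024 hiring outcomes, audited for representativeness.",
    metrics={"auc_overall": 0.86, "auc_group_A": 0.87, "auc_group_B": 0.84},
    known_failure_modes=["non-standard resume formats", "career gaps"],
    out_of_scope_uses=["fully automated hiring decisions"],
)
print(json.dumps(asdict(card), indent=2))
```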

After deployment, ongoing monitoring demonstrates the continued diligence that courts expect. This includes real-world performance tracking, incident reporting systems that collect and analyze failure cases, and rapid-response capabilities that allow critical vulnerabilities to be patched and safety notices to be issued promptly.
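A minimal sketch of the performance-tracking piece, assuming labeled outcomes eventually become available; the thresholds and names are illustrative:

```python
def check_drift(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """Compare live accuracy over a recent window against the
    validated baseline and flag degradation for investigation.

    recent_outcomes: list of (prediction, actual) pairs.
    """
    correct = sum(p == a for p, a in recent_outcomes)
    live_accuracy = correct / len(recent_outcomes)
    degraded = live_accuracy < baseline_accuracy - tolerance
    return live_accuracy, degraded

# Hypothetical window: validated at 92% accuracy, now observing errors.
window = [(1, 1)] * 83 + [(1, 0)] * 17
live, degraded = check_drift(0.92, window)
if degraded:
    # In a real system this would open an incident ticket and trigger
    # the root-cause analysis described above.
    print(f"ALERT: live accuracy {live:.0%} below baseline tolerance")
```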

Contractual risk allocation is an essential complement to technical measures. In business-to-business relationships, limitation of liability clauses, customer indemnification for misuse, insurance requirements, and arbitration provisions all serve to manage exposure. From an insurance perspective, AI developers should evaluate coverage across multiple policy types: general liability (for bodily injury and property damage), professional liability and errors-and-omissions (for negligent advice or services), cyber liability (for data breaches and network security failures), and the emerging category of AI-specific product liability policies.

For AI Deployers and Users

Organizations deploying AI systems should begin with thorough vendor due diligence, requesting validation studies, fairness audits, and security assessments before procurement. Reviewing the vendor's testing methodology and known limitations is essential, as is confirming regulatory compliance such as FDA clearance, CE marking, or bias audit results where applicable. Contract negotiations should secure favorable indemnification terms, appropriate liability caps, and the right to audit the vendor's compliance claims.

Human-in-the-loop processes are both a best practice and a liability shield. Using AI to assist rather than replace human decision-makers, training personnel on AI limitations, and establishing clear escalation procedures for when to seek human review all demonstrate the reasonable care that courts evaluate in negligence claims. These processes should be documented systematically.

Ongoing oversight after deployment is equally important. Monitoring AI performance for accuracy, bias, and failure rates, conducting periodic revalidation to ensure performance has not degraded, investigating incidents through root cause analysis, and implementing corrective actions all form part of the standard of care that deployers should expect to be measured against. Insurance coverage should include employment practices liability (for discrimination claims arising from AI hiring tools), directors and officers coverage (for board-level decisions to deploy AI), and cyber insurance (for AI-related data breaches).

For Individuals Harmed by AI

Individuals who have suffered harm from an AI system face a challenging but navigable path to recovery. The first step is thorough documentation: medical records for injuries, employment records for discrimination, financial records for economic loss, copies of AI system outputs and recommendations, and all communications with the AI deployer including complaints and responses.

Identifying the responsible parties requires looking across the entire AI value chain, from the developer or manufacturer of the AI system, to the deployer that put it into use, to the data providers whose training data may have been defective, to the human decision-makers who may have failed in their oversight responsibilities.

Establishing causation, often the most difficult element, typically requires expert technical testimony and aggressive discovery to obtain AI system documentation, training data, and validation studies. The available legal theories are broad: product liability (strict liability for defects if the AI was sold as a product), negligence (if the developer or deployer failed to exercise reasonable care), discrimination (if the AI produced disparate impact in violation of Title VII, the Fair Housing Act, or the ECOA), breach of contract (if the AI failed to perform as warranted), and misrepresentation (if the vendor made false claims about the system's capabilities). Remedies may include compensatory damages for medical expenses, lost wages, and pain and suffering; punitive damages in cases of reckless or malicious conduct; injunctive relief requiring changes to or discontinuation of the AI system; and attorney's fees in discrimination cases.

Emerging Issues

Generative AI and Copyright

The rapid proliferation of generative AI has opened a new front in AI liability law. Large language models and image generators trained on copyrighted works without permission face claims that both their training process and their outputs constitute infringement. Several landmark cases filed in 2023 and 2024 are testing these theories. The Authors Guild brought suit against OpenAI for training GPT models on copyrighted books without licensing them. Getty Images sued Stability AI for training the Stable Diffusion image generator on its copyrighted photo library. The New York Times sued both OpenAI and Microsoft for training on its news articles.

These cases proceed on multiple legal theories: direct infringement (the AI reproduced copyrighted works without authorization), contributory infringement (the AI enables users to generate infringing content), and vicarious infringement (the AI provider profits from its users' infringing activity). Defendants have raised fair use as a primary defense, arguing that training on copyrighted works constitutes transformative use. They have also argued that AI outputs do not copy protectable expression from the training data. These arguments remain untested at the appellate level, and the outcomes will shape the legal foundations of the generative AI industry for years to come.

Open-Source AI Liability

Open-source AI presents a distinct liability challenge. When a freely distributed model causes harm, there is no centralized vendor to sue. Developers are often anonymous or judgment-proof. Permissive licenses such as MIT and Apache explicitly disclaim all warranties and liability. Negligence claims against open-source contributors face a high bar, as courts have historically been reluctant to impose a duty of care on volunteer developers. Product liability is unlikely to apply because open-source models are typically free and not "sold" in the commercial sense. The most viable path for plaintiffs is likely to target the entity that deployed the open-source AI in a commercial context, rather than the original developers, on the theory that commercial deployment creates the duty of care that open-source contribution does not.

Autonomous Weapons and Military AI

The use of AI in military contexts raises liability questions that extend beyond domestic tort law into international humanitarian law. The principles of distinction (discriminating between combatants and civilians), proportionality (ensuring that attacks do not cause excessive civilian harm relative to military advantage), and precaution (taking feasible steps to minimize civilian harm) all apply to autonomous weapons systems, though how they apply remains deeply contested.

Proponents argue that AI-guided systems can achieve greater precision than human combatants, potentially reducing civilian casualties. Critics counter that AI lacks the contextual moral judgment that complex battlefield decisions require, and that autonomous weapons create an accountability gap in which no individual bears meaningful responsibility for unlawful harm. Under existing international law, military commanders bear responsibility for war crimes committed by their subordinates, a doctrine that may extend to harms caused by AI systems under their command. States bear responsibility for violations of international humanitarian law. And individual criminal liability at the International Criminal Court could potentially reach the designers and operators of autonomous weapons that commit war crimes.

Key Takeaways

AI liability is distributed, not concentrated. Developers, deployers, data providers, and human decision-makers all occupy positions in the liability chain, and determining which party bears primary responsibility depends on the nature of the harm, the contractual allocation of risk, and the applicable legal framework.

Product liability doctrine applies most clearly to AI embedded in physical products such as autonomous vehicles, medical devices, and robots, where strict liability for design, manufacturing, and warning defects is well established. AI delivered as a service generally falls outside the strict liability regime.

Professionals cannot delegate their liability to an algorithm. Physicians, attorneys, investment advisers, and other fiduciaries remain fully responsible for the decisions they make with AI assistance. Fiduciary duty is, by its nature, non-delegable.

Algorithmic discrimination creates civil rights liability regardless of intent. AI systems that produce unjustified disparate impact can violate Title VII, the Fair Housing Act, and the ECOA. Employers and deployers bear this liability even when the AI was purchased from a third-party vendor.

The proposed EU strict liability regime for high-risk AI represents a paradigm shift: deployers would face liability without any requirement that the plaintiff prove defect or negligence. The United States, by contrast, continues to rely on a patchwork of sector-specific regulations and state tort law, with no comprehensive federal AI liability statute on the horizon.

Contractual disclaimers provide meaningful but limited protection. They cannot eliminate liability for personal injury, and consumer-facing disclaimers are frequently held unenforceable. Even in business-to-business contexts, limitation clauses face increasing judicial scrutiny.

Insurance is no longer optional for any organization developing or deploying AI. Traditional general liability and errors-and-omissions policies may not cover AI-specific risks. Organizations should evaluate AI-specific product liability coverage, professional liability, employment practices liability, and cyber insurance, with limits adequate to the scale of potential exposure.

Common Questions

Who is liable when an AI system causes harm?

Liability depends on your role. Developers and vendors may face product liability, negligence, or misrepresentation claims. Deployers can be liable under vicarious liability, negligence, or discrimination laws if they fail to oversee AI or ignore risks. Individual end-users are usually not liable unless they misuse AI or act negligently in a professional capacity. Courts will ask whether you exercised reasonable care given your role and expertise.

Can an AI system itself be held legally liable?

No. Current law only recognizes humans and legal entities like corporations as liable persons. AI systems lack legal personhood and cannot be sued or prosecuted. All civil and criminal liability attaches to the humans and organizations that design, deploy, or control the AI.

Does using open-source AI shield my organization from liability?

No. Open-source licenses typically disclaim warranties and liability for the contributors, not for you. If you deploy open-source AI in a product or service, you remain responsible for defects, negligence, and regulatory violations. You must test, validate, and monitor open-source AI just as you would a commercial system.

Is AI treated as a product or a service?

AI embedded in physical products is usually treated as a product and subject to strict product liability for design, manufacturing, and warning defects. AI delivered as a cloud service is generally governed by contract and negligence law, with vendors relying on terms of service and liability caps. As a result, plaintiffs typically have an easier path to recovery against physical AI products than against AI-as-a-service.

Can liability for AI be disclaimed by contract?

You cannot disclaim liability for personal injury or for gross negligence and intentional misconduct in most jurisdictions. In B2B settings, you can often limit liability for economic loss if terms are negotiated and not against public policy. In consumer contexts, aggressive disclaimers are frequently struck down as unconscionable or inconsistent with consumer protection and warranty laws.

What insurance coverage should AI developers and deployers carry?

AI developers and vendors should consider product liability, professional liability (E&O), cyber liability, general liability, and D&O coverage. Deployers should add employment practices liability for algorithmic discrimination, professional liability for AI-assisted services, cyber coverage, and D&O. Review policies for AI-specific exclusions and ensure limits are adequate for potential AI-related claims.

How does the EU approach to AI liability differ from the U.S. approach?

The proposed EU AI Liability Directive would introduce a harmonized, AI-specific framework with fault presumptions, evidence disclosure duties, and strict liability for high-risk AI with capped damages for property and pure economic loss. The U.S. relies on existing tort, contract, and sectoral regulations, with significant state-by-state variation and a stronger emphasis on fault-based liability and contractual risk allocation.

Liability vs. Accountability

Liability is formal legal responsibility enforceable through courts and regulators. Accountability is broader and includes ethical, reputational, and governance responsibilities that may not trigger legal sanctions but still drive stakeholder trust and regulatory scrutiny.

EU Strict Liability Paradigm Shift

Under the proposed EU strict liability regime, deployers of high-risk AI can be liable for harm without any showing of defect or negligence. For organizations used to U.S.-style fault-based standards, this dramatically raises the bar for risk controls, documentation, and insurance in EU-facing deployments.

The Reasonable Professional in an AI World

Courts appear to be converging on the view that competent professionals must understand the AI tools they use, remain able to challenge AI outputs, and document their independent judgment, especially in medicine, law, and finance.

<50

Estimated number of U.S. product liability cases involving AI that had reached trial as of 2024; most claims settle before verdict

Source: Synthesized from U.S. case tracking through 2024

"Fiduciary duties are non-delegable: boards, advisers, and professionals cannot shift legal responsibility to algorithms, no matter how advanced."

AI governance and fiduciary duty commentary, 2024

References

  1. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024).
  2. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (2023).
  3. OECD Principles on Artificial Intelligence. OECD (2019).
  4. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
  5. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).
  6. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023).
  7. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore (2012).

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs


Talk to Us About AI Governance & Risk Management

We work with organizations across Southeast Asia on AI governance and risk management programs. Let us know what you are working on.