AI Governance & Risk Management · Point of View

Regulatory requirements: Industry Perspective

3 min read · Pertama Partners
Updated February 21, 2026
For: CEO/Founder, CTO/CIO, CFO, CHRO

A comprehensive point of view on regulatory requirements, covering strategy, implementation, and optimization across Southeast Asian markets.


Key Takeaways

  1. Financial services AI compliance costs average $3.2 million annually per institution, with cross-jurisdictional operations costing 2.1x more
  2. The FDA has authorized over 950 AI-enabled medical devices, with 43% lacking published evidence of performance across racial subgroups
  3. Autonomous vehicle safety validation requires an estimated 11 billion miles of testing to prove 20% improvement over human drivers
  4. The EU's revised Product Liability Directive makes manufacturers strictly liable for defective AI, elevating documentation to liability defense
  5. All three industries show convergence on mandatory disaggregated testing across subgroups as a non-negotiable compliance requirement

AI regulatory requirements vary dramatically by industry, reflecting the distinct risk profiles, existing regulatory infrastructure, and societal impact of AI applications in each sector. Financial services, healthcare, and automotive face the most developed and stringent requirements, and their compliance experiences offer valuable lessons for organizations in every sector.

Financial Services: Leading the Regulatory Curve

Financial services is the most heavily regulated industry for AI deployment, benefiting from decades of existing model risk management frameworks that predated AI-specific regulation.

Existing Regulatory Foundation

The US Federal Reserve's SR 11-7 guidance on model risk management, originally published in 2011 and updated for AI in 2024, established principles that now underpin AI-specific regulations globally. Key requirements include:

  • Model validation: Independent review of model conceptual soundness, data integrity, and performance before deployment
  • Ongoing monitoring: Continuous performance tracking with documented escalation procedures for model degradation
  • Model inventory: Comprehensive registry of all models with risk ratings, validation status, and ownership

The European Banking Authority's 2025 guidelines on AI in banking added machine learning-specific requirements: explainability standards for credit decisions (minimum SHAP or LIME explanations for individual decisions), bias testing across protected characteristics, and mandatory human review for credit denials exceeding EUR 50,000.

AI-Specific Requirements

Credit decisioning: The EU AI Act classifies AI systems evaluating creditworthiness as high-risk (Annex III, Area 5b), triggering full conformity assessment requirements. In the US, the Consumer Financial Protection Bureau's (CFPB) 2025 interpretive guidance confirmed that AI-generated adverse action notices must include specific, accurate reasons rather than generic "model-based" explanations. The Equal Credit Opportunity Act's disparate impact framework applies to AI models even when they use no explicit demographic inputs.

Algorithmic trading: The EU's MiFID II framework requires algorithmic trading firms to maintain testing environments that simulate stressed market conditions. AI-driven trading strategies must demonstrate kill-switch capabilities and human override mechanisms. Singapore's MAS Technology Risk Management Guidelines (2024 revision) mandate real-time monitoring of all algorithmic trading systems with automated circuit breakers.

Anti-money laundering (AML): The Financial Action Task Force's (FATF) 2025 guidance on AI in AML endorsed machine learning for transaction monitoring but required that institutions maintain explainable audit trails. The US Financial Crimes Enforcement Network (FinCEN) requires that AI-generated suspicious activity reports include human review and documentation of the AI system's rationale.

Insurance underwriting: The National Association of Insurance Commissioners (NAIC) Model Bulletin on AI (adopted by 38 US states by 2025) requires insurers to verify that AI underwriting models do not unfairly discriminate. Colorado's SB 21-169 specifically mandates external testing of life insurance AI models for proxy discrimination based on race, ethnicity, and other protected characteristics.
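One common first-pass screen for the kind of unfair discrimination these rules target is the four-fifths rule: flag any group whose selection rate falls below 80% of the highest group's rate. This is a sketch of that generic screen with hypothetical numbers, not the specific test Colorado SB 21-169 or the NAIC bulletin mandates.

```python
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Four-fifths-rule screen: each group's selection rate divided by the
    highest group's rate. Ratios below 0.8 flag potential disparate impact.

    outcomes maps group name -> (approved, total).
    """
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical underwriting outcomes by group.
ratios = adverse_impact_ratios({
    "group_a": (720, 1000),   # 72% approval
    "group_b": (510, 1000),   # 51% approval
})
flagged = [g for g, r in ratios.items() if r < 0.8]
```

A flagged ratio is a trigger for deeper analysis (proxy-variable investigation, controlled comparisons), not by itself proof of discrimination.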

Compliance Cost Reality

A 2025 Accenture study of 50 global banks found that AI regulatory compliance costs average $3.2 million annually per institution, with model validation representing 40% of total spend. Banks operating across the EU, US, and Asia-Pacific face 2.1x higher costs due to jurisdictional complexity.

Healthcare: Patient Safety as the Organizing Principle

Healthcare AI regulation is organized around a single principle: patient safety. This creates a uniquely prescriptive regulatory environment where AI systems are often classified as medical devices subject to pre-market review.

Medical Device Classification

United States: The FDA's 2024 updated framework classifies AI/ML-based Software as a Medical Device (SaMD) using a risk matrix combining the seriousness of the health condition and the significance of AI-provided information to clinical decisions. As of March 2026, the FDA has authorized over 950 AI-enabled medical devices. The Predetermined Change Control Plan framework (finalized 2024) allows manufacturers to pre-specify types of algorithm updates that can be implemented without new 510(k) submissions.

European Union: The Medical Device Regulation (MDR) classifies AI diagnostic tools as Class IIa or higher, requiring conformity assessment by notified bodies. The EU AI Act layers additional requirements. AI systems used as safety components of medical devices are classified as high-risk, requiring both MDR and AI Act compliance.

Southeast Asia: Singapore's Health Sciences Authority (HSA) published guidance on AI medical devices in 2024, establishing a tiered regulatory approach. Malaysia's Medical Device Authority requires registration of AI diagnostic tools under the Medical Device Act 2012. Thailand's FDA is developing AI-specific medical device guidance expected in 2026.

Clinical Validation Requirements

Healthcare AI faces uniquely stringent validation requirements:

  • Prospective clinical studies: High-risk AI diagnostic tools increasingly require prospective clinical validation, not just retrospective performance testing. The FDA's 2025 guidance recommends multi-site studies reflecting the diversity of the intended patient population.
  • Subgroup performance: Regulators require disaggregated performance metrics across patient demographics including age, sex, race, and ethnicity. A 2025 JAMA study found that 43% of FDA-authorized AI devices had no published evidence of performance across racial subgroups.
  • Continuous learning restrictions: Unlike most AI applications, clinical AI systems face restrictions on continuous learning in production. The FDA requires that model updates undergo validation before deployment, even under Predetermined Change Control Plans.
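The subgroup-performance requirement above amounts to computing validation metrics per demographic stratum rather than in aggregate. A minimal sketch, with hypothetical data and sensitivity as the example metric:

```python
from collections import defaultdict

def sensitivity_by_subgroup(records):
    """Disaggregated sensitivity (true-positive rate) per subgroup.

    records: iterable of (subgroup, y_true, y_pred) with 1 = condition present.
    Returns {subgroup: sensitivity} -- the kind of table regulators now
    expect alongside a single aggregate figure."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Hypothetical validation results for an AI diagnostic.
results = sensitivity_by_subgroup([
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1),
])
# group_a: 2/3 sensitivity; group_b: 1/3 -- a gap an aggregate metric would hide.
```

In practice the same disaggregation would be applied to specificity, calibration, and positive predictive value, with confidence intervals, since small subgroup samples make point estimates noisy.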

Real-World Impact

The regulatory framework has tangible consequences. In 2025, the FDA issued warning letters to three organizations deploying clinical decision support AI without appropriate regulatory clearance. The EU's market surveillance actions under the MDR resulted in two AI diagnostic tool recalls for performance degradation in real-world clinical settings.

Automotive: Safety-Critical AI at Scale

Autonomous vehicles and advanced driver assistance systems (ADAS) represent the highest-stakes AI regulatory environment, where system failures can directly cause fatalities.

Global Regulatory Landscape

UNECE Regulations: The United Nations Economic Commission for Europe's WP.29 regulations on automated driving (updated 2025) establish international minimum requirements for automated lane-keeping systems (ALKS) and automated driving systems. These regulations are adopted or referenced by the EU, Japan, South Korea, and Australia.

United States: The National Highway Traffic Safety Administration (NHTSA) issued a Final Rule in 2025 establishing mandatory reporting requirements for crashes involving AI-equipped vehicles. The Automated Vehicle Safety Consortium (AVSC) best practices, while voluntary, have become de facto standards referenced in state-level autonomous vehicle laws adopted in 42 states.

China: The Ministry of Industry and Information Technology (MIIT) published comprehensive autonomous driving regulations in 2025 requiring localized data processing (no cross-border transfer of driving data), government access to operational design domain documentation, and mandatory safety testing at government-designated facilities.

Technical Compliance Requirements

Automotive AI faces the most prescriptive technical requirements of any industry:

  • Safety cases: ISO 21448 (Safety of the Intended Functionality, SOTIF) requires structured safety arguments demonstrating that the AI system performs safely across its operational design domain. This includes systematic identification of triggering conditions for hazardous behavior.
  • Validation scale: Proving autonomous driving safety requires astronomical testing volumes. A widely cited RAND Corporation study estimated that autonomous vehicles would need to drive 11 billion miles to demonstrate with 95% confidence that they are 20% safer than human drivers. Industry practice combines simulation (billions of virtual miles), closed-course testing, and on-road validation.
  • Cybersecurity: UN Regulation No. 155 mandates cybersecurity management systems for connected vehicles, including threat analysis, vulnerability management, and over-the-air update security. AI perception systems are explicit targets in the threat model.
  • Data recording: Event data recorders for automated driving systems must capture AI decision inputs and outputs for a minimum period before any crash or safety-relevant event, enabling post-incident investigation.
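The scale of the validation problem can be seen with a back-of-envelope sample-size calculation. This is a simplification using a normal approximation to Poisson fatality counts, not RAND's exact method; assumptions are RAND's published US human-driver fatality rate of roughly 1.09 per 100 million miles, one-sided 95% confidence, and 80% power to detect a 20% lower autonomous rate.

```python
from math import sqrt

# Back-of-envelope sample size (normal approximation to Poisson counts).
r_human = 1.09e-8          # fatalities per mile (RAND's US input)
r_av = 0.8 * r_human       # hypothesized 20% improvement
z_alpha = 1.645            # one-sided 95% confidence
z_beta = 0.84              # 80% power

# Miles needed for a rate test to separate r_av from r_human.
miles = (z_alpha * sqrt(r_human) + z_beta * sqrt(r_av)) ** 2 / (r_human - r_av) ** 2
# -> on the order of 10^10 miles, the same ballpark as RAND's 11 billion
```

The exact figure shifts with the confidence and power chosen, but any reasonable choice lands in the tens of billions of miles, which is why the industry leans so heavily on simulation.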

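The data-recording requirement is, mechanically, a ring buffer: a fixed window of recent AI decision inputs and outputs that is frozen when a safety-relevant event fires. A minimal sketch, with illustrative field names and capacity that come from no particular regulation:

```python
from collections import deque
import time

class DecisionRecorder:
    """Sketch of an event data recorder for an automated driving system:
    a fixed-capacity ring buffer of recent AI decision inputs/outputs,
    frozen to a snapshot when a trigger fires. Fields are illustrative."""

    def __init__(self, capacity: int = 3000):   # e.g. ~30 s at 100 Hz
        self._buffer: deque = deque(maxlen=capacity)

    def log(self, sensor_inputs, model_output) -> None:
        # deque(maxlen=...) silently evicts the oldest frame when full.
        self._buffer.append({
            "t": time.time(),
            "inputs": sensor_inputs,
            "output": model_output,
        })

    def snapshot(self) -> list:
        """On a crash/trigger, persist the pre-event window for investigators."""
        return list(self._buffer)

rec = DecisionRecorder(capacity=3)
for frame in range(5):
    rec.log(sensor_inputs={"frame": frame}, model_output="keep_lane")
window = rec.snapshot()   # only the 3 most recent frames survive
```

A real recorder would also need tamper-evident storage and post-trigger capture for a short window after the event, but the pre-event ring buffer is the core mechanism that makes post-incident investigation possible.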
Liability Implications

The EU's revised Product Liability Directive (2024) explicitly includes AI and software within the definition of "product," making manufacturers strictly liable for defective AI in vehicles. This shifts the economic calculus: comprehensive compliance documentation becomes evidence of due diligence in liability disputes, not just regulatory checkbox satisfaction.

Cross-Industry Lessons

Three patterns emerge from examining these industry-specific regulatory frameworks:

Existing regulation is the baseline, not the ceiling. Every industry saw AI-specific requirements layered atop existing regulatory frameworks, not replacing them. Organizations must satisfy both traditional and AI-specific requirements simultaneously.

Validation requirements are converging on disaggregated testing. Across financial services, healthcare, and automotive, regulators increasingly require performance evidence across subgroups, whether demographic groups in lending, patient populations in healthcare, or edge cases in driving. Building disaggregated testing into development pipelines is no longer optional.

Documentation is a strategic asset. In all three industries, comprehensive documentation serves multiple purposes: regulatory compliance, liability defense, customer trust, and internal quality management. Organizations treating documentation as overhead rather than infrastructure consistently underperform in regulatory interactions.

Common Questions

What AI regulatory requirements does financial services face?

Financial services faces requirements across credit decisioning (EU AI Act high-risk classification, CFPB adverse action explainability), algorithmic trading (MiFID II kill-switch requirements, MAS real-time monitoring), AML (FATF explainable audit trails), and insurance underwriting (NAIC AI model testing for proxy discrimination). Average compliance costs are $3.2 million annually per institution according to Accenture.

How does the FDA regulate AI medical devices?

The FDA classifies AI/ML-based Software as a Medical Device (SaMD) using a risk matrix. Over 950 AI-enabled devices have been authorized as of March 2026. The Predetermined Change Control Plan framework allows pre-specified algorithm updates without new submissions. High-risk tools require prospective clinical validation with disaggregated performance metrics across patient demographics.

What does safety validation require for autonomous vehicles?

ISO 21448 (SOTIF) requires structured safety cases across the operational design domain. RAND Corporation estimated 11 billion miles of driving would be needed to demonstrate a 20% safety improvement over humans with 95% confidence. Industry practice combines billions of virtual simulation miles, closed-course testing, and on-road validation, along with cybersecurity management under UN Regulation No. 155.

How do US and EU approaches to medical AI differ?

The FDA uses a risk matrix for SaMD classification with a streamlined Predetermined Change Control Plan for updates. The EU requires conformity assessment by notified bodies under the Medical Device Regulation (MDR), plus additional AI Act requirements for AI used as safety components. The EU approach results in dual compliance obligations not present in the US framework.

What lessons apply across industries?

Three patterns: (1) AI-specific requirements layer atop existing industry regulations rather than replacing them; (2) disaggregated testing across subgroups is becoming mandatory across all sectors, covering demographic groups in lending, patient populations in healthcare, and edge cases in driving; (3) comprehensive documentation serves as regulatory compliance, liability defense, and strategic asset simultaneously.

References

  1. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024).
  2. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).
  3. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore (2012).
  4. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
  5. General Data Protection Regulation (GDPR) — Official Text. European Commission (2016).
  6. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST) (2023).
  7. OECD Principles on Artificial Intelligence. OECD (2019).

Talk to Us About AI Governance & Risk Management

We work with organizations across Southeast Asia on AI governance and risk management programs. Let us know what you are working on.