What Are the BOT AI Risk Management Guidelines?
Thailand's financial services sector faces a defining regulatory inflection point. On 12 September 2025, the Bank of Thailand (BOT) published the final version of its AI Risk Management Guidelines for Financial Service Providers, establishing mandatory requirements for how banks, financial institutions, specialized financial institutions, and payment service providers must govern and manage the risks associated with artificial intelligence systems.
The problem these guidelines address is neither theoretical nor distant. As Thai financial institutions accelerate AI adoption across credit scoring, fraud detection, customer segmentation, and automated advisory services, the governance infrastructure surrounding these systems has failed to keep pace. The BOT's framework closes that gap by imposing structured oversight obligations that apply equally to in-house developed AI systems and third-party AI tools. The underlying principles align closely with Singapore's FEAT framework (Fairness, Ethics, Accountability, Transparency), positioning Thailand within a coherent regional regulatory architecture rather than charting an isolated course.
Who Must Comply
The scope of these guidelines is deliberately broad. Every entity operating under BOT supervision falls within the compliance perimeter. That includes Thai commercial banks and branches of foreign banks, specialized financial institutions such as the Government Savings Bank and SME Bank, licensed payment service providers, fintech companies holding BOT licenses, and any other BOT-regulated entity deploying AI in its operations.
Critically, the BOT has adopted a proportionality principle. Compliance expectations scale with the institution's size, operational complexity, and the extent of its AI usage. A regional bank operating a single chatbot faces different expectations than a universal bank running dozens of machine learning models across its lending, trading, and compliance functions. This proportionality is intended to prevent the guidelines from becoming a barrier to innovation at smaller institutions while still ensuring that material AI risks receive adequate governance attention regardless of where they arise.
Two Main Pillars
The guidelines are organized around two structural pillars that together span the full lifecycle of AI governance, from boardroom oversight through technical deployment and ongoing monitoring.
Pillar 1: Governance of AI System Implementation
The first pillar establishes that AI governance is a board-level responsibility. The board must approve AI governance policies and define the institution's risk appetite for AI applications. Senior management carries the obligation to ensure adequate resources and capabilities are allocated to AI risk management, with clear reporting lines and escalation procedures for AI-related issues.
Beneath this executive layer, the guidelines require a comprehensive AI governance framework encompassing policies that cover the full AI lifecycle: development, deployment, monitoring, and retirement. Institutions must implement a formal risk assessment methodology for AI applications and define roles and responsibilities for AI governance across the organization. These structures should not exist in isolation. The BOT expects AI governance to integrate with existing risk management functions and internal audit capabilities, recognizing that AI risk is not a standalone domain but an extension of operational, model, and technology risk.
A particularly consequential requirement is the mandate for a complete AI inventory. Every institution must catalog all AI systems in use, classify each by materiality and risk level, and conduct regular reviews to keep the inventory current. For many institutions, this inventory exercise alone will surface systems and dependencies that leadership did not previously understand at an enterprise level.
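To make the requirement concrete, the sketch below shows what a machine-readable inventory entry might look like in Python. The fields and the review-cadence logic are illustrative assumptions on our part; the BOT prescribes the inventory obligation, not a schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One entry in the enterprise AI inventory (illustrative schema only)."""
    system_id: str
    name: str
    business_owner: str        # first-line accountability for the system
    use_case: str              # e.g. "retail credit scoring"
    vendor: str | None         # None for in-house developed systems
    risk_tier: RiskTier        # classification by risk level
    is_material: bool          # materiality per internal criteria
    last_reviewed: date
    review_interval_days: int = 365

    def review_overdue(self, today: date) -> bool:
        """Flag entries whose periodic review has lapsed."""
        return (today - self.last_reviewed).days > self.review_interval_days
```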
Pillar 2: AI System Development and Security Controls
The second pillar shifts from governance architecture to technical execution. It begins with data governance, requiring institutions to establish data quality standards for both AI training data and operational data, implement data lineage tracking, protect personal and sensitive data in accordance with Thailand's Personal Data Protection Act (PDPA), and monitor training data for bias.
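As a rough illustration of what data quality standards and training-data bias monitoring can reduce to in practice, the following sketch gates a training set on completeness and on minimum representation of protected groups. The thresholds and column conventions are assumptions for the example, not figures drawn from the guidelines.

```python
import pandas as pd


def training_data_checks(df: pd.DataFrame,
                         protected_col: str,
                         max_null_rate: float = 0.02,
                         min_group_share: float = 0.05) -> list[str]:
    """Run basic pre-training quality gates; return a list of findings."""
    findings = []
    # Completeness: columns whose null rate exceeds the tolerance.
    null_rates = df.isna().mean()
    for col, rate in null_rates[null_rates > max_null_rate].items():
        findings.append(f"{col}: null rate {rate:.1%} exceeds {max_null_rate:.0%}")
    # Representation: protected groups too thin to learn from fairly.
    shares = df[protected_col].value_counts(normalize=True)
    for group, share in shares[shares < min_group_share].items():
        findings.append(f"{protected_col}={group}: only {share:.1%} of training rows")
    return findings
```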
Model development must follow documented processes with validation and testing requirements enforced before deployment. The BOT requires comprehensive model documentation covering design decisions, data sources, known limitations, and underlying assumptions. High-risk models must undergo peer review, creating an internal check against the tendency to deploy models that perform well on narrow metrics but carry unexamined risks.
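The documentation requirement maps naturally onto a model-card structure. A minimal template might look like the following; the keys mirror the four areas the BOT names, while the structure itself is our own illustration.

```python
# Minimal model-card template mirroring the four documentation areas
# named in the guidelines; keys and nesting are illustrative.
MODEL_CARD_TEMPLATE = {
    "model_id": "",
    "design_decisions": {
        "algorithm": "",            # e.g. gradient-boosted trees
        "target_definition": "",    # what the model predicts, and why
        "rejected_alternatives": [],
    },
    "data_sources": {
        "training_set": "",         # provenance reference, not the data itself
        "time_window": "",
        "exclusions": [],
    },
    "known_limitations": [],        # e.g. thin-file applicants, new segments
    "assumptions": [],              # e.g. stable macroeconomic conditions
    "validation": {
        "validated_by": "",         # independent of the development team
        "peer_reviewed": False,     # required for high-risk models
        "date": "",
    },
}
```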
Deployment controls mandate staged rollouts with active monitoring, integration testing against existing systems, user training and change management programs, and documented rollback procedures. Once in production, AI systems require ongoing performance monitoring against defined metrics, statistical detection of data and model drift, regular model revalidation cycles, and incident detection and response capabilities.
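Drift detection in particular lends itself to a compact illustration. The sketch below applies a two-sample Kolmogorov-Smirnov test to each input feature against its training-time reference; the choice of test and the alert threshold are assumptions for the example, since the guidelines mandate drift monitoring without prescribing a method.

```python
import numpy as np
from scipy.stats import ks_2samp


def feature_drift_alerts(reference: np.ndarray,
                         live: np.ndarray,
                         feature_names: list[str],
                         p_threshold: float = 0.01) -> list[str]:
    """Compare each live feature's distribution against its training-time
    reference; return alerts for features that appear to have drifted."""
    alerts = []
    for i, name in enumerate(feature_names):
        res = ks_2samp(reference[:, i], live[:, i])
        if res.pvalue < p_threshold:
            alerts.append(f"{name}: KS={res.statistic:.3f}, p={res.pvalue:.2e}")
    return alerts
```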
The guidelines also address a vulnerability that many institutions underestimate: third-party AI management. Institutions must conduct due diligence on AI vendors and service providers, embed contractual requirements for data handling and model performance, maintain ongoing oversight of third-party AI performance, and develop exit strategies and contingency plans. The days of treating a vendor's AI model as a black box that escapes the institution's own governance obligations are over.
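In practice, the four third-party obligations translate into a standing checklist per vendor engagement. The sketch below is one hypothetical way to encode it; the items paraphrase the guideline's themes rather than quoting any BOT-issued list.

```python
# Illustrative due-diligence checklist for a third-party AI engagement.
VENDOR_AI_CHECKLIST = {
    "due_diligence": [
        "vendor model development and validation practices reviewed",
        "security certifications and audit reports obtained",
    ],
    "contractual_requirements": [
        "data handling and PDPA compliance clauses in place",
        "model performance SLAs and reporting obligations defined",
    ],
    "ongoing_oversight": [
        "periodic performance reports reviewed against SLAs",
        "incident notification window agreed and tested",
    ],
    "exit_strategy": [
        "data return and deletion procedures documented",
        "fallback or replacement plan rehearsed",
    ],
}


def open_items(completed: set[str]) -> list[str]:
    """Return checklist items not yet evidenced as complete."""
    return [item for items in VENDOR_AI_CHECKLIST.values()
            for item in items if item not in completed]
```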
Key Principles
Four principles underpin the entire framework, each carrying specific operational implications that go beyond aspirational language.
Fairness requires that AI systems not produce unfairly biased outcomes. Financial institutions must implement monitoring for bias across demographic groups and customer segments. The BOT identifies credit scoring, lending decisions, and insurance pricing as areas of particular focus, reflecting the outsized harm that algorithmic bias can inflict in these domains.
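One widely used fairness check compares approval rates across groups. A minimal sketch follows, assuming a decisions table with a group column and a boolean outcome; the "four-fifths" threshold often applied to the resulting ratios is an industry heuristic, not a BOT-mandated cutoff.

```python
import pandas as pd


def approval_rate_parity(decisions: pd.DataFrame,
                         group_col: str,
                         approved_col: str = "approved") -> pd.Series:
    """Approval rate per group, divided by the best-served group's rate.
    Ratios well below 1.0 (e.g. under the common 0.8 heuristic) warrant
    investigation before any conclusion about unfair bias is drawn."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return rates / rates.max()
```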
Ethics demands that AI serve legitimate business purposes and not cause disproportionate harm. This principle extends beyond legal compliance to encompass the broader question of whether an AI application's benefits justify its risks and potential for adverse outcomes.
Accountability mandates clear ownership structures. The board bears ultimate responsibility for AI governance, with senior management ensuring effective day-to-day oversight. This explicit assignment of accountability eliminates the organizational ambiguity that has allowed AI risks to fall between governance gaps at many institutions.
Transparency requires that AI decision-making be explainable to relevant stakeholders. Customers must understand when AI influences decisions affecting them. Regulators must have access to model documentation. This principle will prove particularly challenging for institutions deploying complex ensemble models or deep learning systems where inherent explainability is limited.
Comparison with Regional Financial AI Guidelines
Thailand's guidelines do not exist in regulatory isolation. Across Southeast Asia, financial regulators are converging on similar frameworks, though meaningful differences in approach, scope, and enforcement posture distinguish each jurisdiction.
| Feature | Thailand BOT | Singapore MAS | Malaysia BNM | Indonesia OJK |
|---|---|---|---|---|
| Status | Final (Sep 2025) | Proposed (Nov 2025) | Proposed (Aug 2025) | Mandatory (Dec 2025) |
| Scope | All FSPs | All FIs | Banks, insurers | Banks |
| Principles | FEAT-aligned | FEAT | BNM principles | Pancasila + 6 |
| Third-party AI | Covered | Covered | Covered | Covered |
| GenAI specific | Limited | Yes (MindForge) | Limited | Limited |
| Proportionality | Yes | Yes | Yes | Yes |
Timeline of Regulatory Development and Key Compliance Dates
The Bank of Thailand developed these guidelines through a phased consultative process. Understanding that timeline helps institutions contextualize current requirements and anticipate forthcoming obligations.
June 2025: Consultation Paper Release
In June 2025, the BOT published draft AI Risk Management Guidelines for public consultation, with the comment period running from 12 June to 30 June 2025. The compressed consultation window signaled the regulator's intent to move decisively rather than allow prolonged deliberation to delay implementation.
September 2025: Final Guidelines Publication
On 12 September 2025, the BOT released the final AI Risk Management Guidelines organized across the two-pillar structure described above. The final version incorporated industry feedback from the consultation period while maintaining the core governance and technical control requirements. The alignment with internationally recognized responsible AI principles, particularly the FEAT framework, remained intact throughout the revision process.
Comparing BOT Guidelines Against Regional Regulatory Frameworks
Understanding where the BOT's approach diverges from neighboring regulators is essential for institutions operating across multiple ASEAN jurisdictions.
BOT versus MAS (Singapore)
The Monetary Authority of Singapore published its Veritas Initiative assessment methodology alongside the FEAT principles, emphasizing industry self-governance and voluntary adoption. The BOT's framework takes a more prescriptive path, imposing stricter documentation requirements and establishing explicit inspection authority. This reflects Thailand's traditionally more directive regulatory posture across financial services supervision. Institutions operating in both jurisdictions will find the BOT's requirements represent a higher compliance floor in several areas.
BOT versus Bank Negara Malaysia (BNM)
BNM's Discussion Paper on Artificial Intelligence in the Malaysian Financial Sector, published in August 2025, shares substantial structural overlap with the BOT's guidelines. This convergence reflects coordination through ASEAN Financial Innovation Network working groups. The key differences lie in BNM's additional emphasis on Shariah-compliant financial product considerations and cross-border data transfer provisions aligned with Malaysia's Personal Data Protection Act amendments. Institutions with operations in both Thailand and Malaysia will benefit from the structural similarities but must account for these jurisdiction-specific requirements.
BOT versus OJK (Indonesia)
Indonesia's Otoritas Jasa Keuangan published POJK Regulation 2025 on Technology-Based Lending and Digital Financial Innovation, incorporating generative AI provisions. The OJK maintains separate regulatory tracks for banking, insurance, and capital markets applications, creating a more fragmented compliance landscape. The BOT, by contrast, consolidates oversight through unified guidelines applicable across all licensed financial institution categories, offering a simpler compliance architecture for institutions that operate across multiple financial service lines.
Practical Implementation Roadmap for Financial Institutions
Pertama Partners recommends Thai financial institutions execute five preparatory workstreams to achieve compliance readiness.
Step 1: Governance Architecture Review
The starting point is establishing or strengthening board-level AI oversight. This means constituting a governance subcommittee with a quarterly reporting cadence and documented escalation thresholds for model risk events. AI risk appetite must be formally defined and integrated with existing enterprise risk frameworks. Governance responsibilities should map across the three lines of defense, with first-line business units owning AI risk within their operations, second-line risk functions providing independent challenge, and third-line audit providing periodic assurance.
Step 2: AI System Inventory and Classification
Every deployed system meeting the BOT's definition of an artificial intelligence application must be cataloged. This includes chatbots built on platforms such as Dialogflow or Amazon Lex, automated credit scoring models, fraud detection algorithms, and customer segmentation engines. Each system requires classification by risk level and materiality, which then determines the intensity of governance controls applied. For most institutions, this inventory exercise will be the single most revealing step, exposing AI dependencies and risk concentrations that had previously escaped enterprise-level visibility.
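How classification drives control intensity can be made explicit in code. The mapping below is purely illustrative; under the proportionality principle, each institution defines its own tiers and the controls attached to them.

```python
def control_intensity(risk_tier: str, is_material: bool) -> str:
    """Map a system's classification to the depth of governance applied
    (illustrative tiers; institutions define their own criteria)."""
    if risk_tier == "high" or (risk_tier == "medium" and is_material):
        return "full lifecycle controls, independent validation, peer review"
    if risk_tier == "medium" or is_material:
        return "standard lifecycle controls, periodic revalidation"
    return "baseline inventory entry, annual review"
```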
Step 3: Model Validation and Lifecycle Controls
Independent validation procedures must cover three phases: initial deployment approval, ongoing performance monitoring using statistical drift detection through tools such as Evidently, NannyML, or Fiddler, and periodic recalibration assessments. Development processes require formal documentation, and high-risk models need peer review before production deployment. Rollback procedures and model retirement criteria should be defined before deployment, not improvised after a model begins to degrade.
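For teams that want to understand what those tools compute before adopting one, the population stability index is a representative drift statistic. A library-free sketch follows; the quantile binning scheme and the usual 0.10/0.25 interpretation bands are conventions from credit-risk practice, not BOT requirements.

```python
import numpy as np


def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a reference (training-time) and a live distribution.
    Common heuristics: < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate."""
    # Interior bin edges taken from the reference distribution's quantiles.
    cuts = np.quantile(expected, np.linspace(0, 1, n_bins + 1)[1:-1])
    exp_pct = np.bincount(np.searchsorted(cuts, expected), minlength=n_bins) / len(expected)
    act_pct = np.bincount(np.searchsorted(cuts, actual), minlength=n_bins) / len(actual)
    # Guard sparse bins before taking logarithms.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```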
Step 4: Data Governance and Fairness Mechanisms
Training data provenance must be documented, quality assurance procedures established, and lineage tracking implemented in a manner compatible with PDPA enforcement standards administered by Thailand's Personal Data Protection Committee (PDPC). Fairness metrics relevant to each AI application must be defined, with bias monitoring implemented for credit scoring, lending, and pricing models. Institutions should also establish mechanisms for customers to understand and contest AI-driven decisions, converting the transparency principle from a policy statement into an operational capability.
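PDPA-compatible lineage tracking typically means recording, for every transformation, not just what happened to the data but the legal basis for processing it. A hypothetical record structure, with illustrative field names of our own:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class LineageEntry:
    """One step in a dataset's path from source to model input
    (illustrative fields, not a prescribed schema)."""
    dataset_id: str
    source: str                   # upstream system or prior dataset_id
    transformation: str           # e.g. "joined bureau data; capped outliers"
    contains_personal_data: bool
    pdpa_legal_basis: str         # e.g. "contract", "consent"
    processed_at: datetime
    processed_by: str             # pipeline or team accountable for the step
```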
Step 5: Third-Party AI Management
Vendor contracts require review against the BOT's due diligence and data handling requirements. Ongoing monitoring of vendor AI performance must become a structured process rather than a periodic check. Contingency plans for vendor failure or exit scenarios should be documented and tested. Vendor compliance with both PDPA requirements and BOT guidelines must be contractually enforceable, shifting accountability from informal expectations to binding obligations.
Related Regulations
Thailand's AI risk management guidelines operate within a broader regulatory ecosystem. The Thailand PDPA establishes underlying data protection requirements for all AI data processing. The Singapore MAS AI Guidelines provide a comparable framework for financial AI governance in the region's leading financial hub. Malaysia BNM's AI Guidelines propose similar requirements in a neighboring market with significant cross-border banking activity. Indonesia's OJK AI Guidelines represent the region's most explicitly mandatory financial services AI governance regime. The ASEAN AI Governance Guide serves as the regional framework informing all four national regulators, creating a foundation for eventual regulatory harmonization across Southeast Asia's financial services sector.
Financial institutions with cross-border operations should treat these frameworks not as isolated compliance exercises but as components of an emerging regional standard. Building governance infrastructure that satisfies the most demanding requirements across jurisdictions will prove more efficient than maintaining parallel compliance programs tailored to each national regulator.
Common Questions
Are the BOT AI guidelines final and mandatory?
Yes. The BOT released the final version in September 2025, and they apply to all financial service providers under BOT supervision. Implementation expectations are proportionate to the institution's size and AI usage, but all regulated entities must have basic AI governance in place.
Do the guidelines apply to third-party and vendor AI systems?
Yes. The guidelines explicitly cover third-party AI tools. Financial institutions remain responsible for AI governance even when using vendor-provided AI systems. This includes due diligence, contractual protections, ongoing monitoring, and exit strategies.
How do the BOT guidelines compare with Singapore's MAS framework?
They are closely aligned. Both use FEAT-aligned principles, require board oversight, mandate lifecycle controls, and expect proportionate implementation. Key differences: the BOT guidelines were finalized earlier (September 2025, while MAS remains in consultation), and MAS has more explicit GenAI provisions through Project MindForge.
What do the guidelines require on fairness and bias?
Financial institutions must monitor AI systems for unfair bias across demographic groups and customer segments. This is particularly important for credit scoring, lending decisions, and insurance pricing, areas where AI bias could have significant financial impact on customers.
References
- AI Risk Management Guidelines for Financial Service Providers. Bank of Thailand (BOT), 2025.
- Thailand Issues AI Risk Management Guidelines for Financial Service Providers. Tilleke & Gibbins, 2025.
- Thailand Drafts AI Risk Management Guidelines for Financial Service Providers. Tilleke & Gibbins, 2025.
- Consultation Paper on Proposed Guidelines on Artificial Intelligence Risk Management for Financial Institutions. Monetary Authority of Singapore (MAS), 2025.
- Discussion Paper — Artificial Intelligence in the Malaysian Financial Sector. Bank Negara Malaysia (BNM), 2025.
- Bank of Thailand Policy on Risk Management of AI Systems — Consultation. Digital Policy Alert, 2025.

