What Are the BOT AI Risk Management Guidelines?
On 12 September 2025, the Bank of Thailand (BOT) released the final version of its AI Risk Management Guidelines for Financial Service Providers. These guidelines establish mandatory requirements for how banks, financial institutions, specialized financial institutions, and payment service providers govern and manage AI risks.
The guidelines build on principles aligned with Singapore's FEAT framework (Fairness, Ethics, Accountability, Transparency) and apply to both in-house developed AI systems and third-party AI tools.
Who Must Comply
The guidelines apply to all entities under BOT supervision:
- Commercial banks (Thai and foreign branches)
- Specialized financial institutions (Government Savings Bank, SME Bank, etc.)
- Payment service providers
- Licensed fintech companies
- Other BOT-regulated entities using AI

[AI compliance](/glossary/ai-compliance) expectations are proportionate to the institution's size, complexity, and extent of AI usage.
Two Main Pillars
Pillar 1: Governance of AI System Implementation
Board and senior management oversight:
- Board must approve AI governance policies and risk appetite
- Senior management must ensure adequate resources and capabilities for AI risk management
- Clear reporting lines and escalation procedures for AI issues
AI governance framework:
- Comprehensive policies covering AI development, deployment, monitoring, and retirement
- Risk assessment methodology for AI applications
- Roles and responsibilities for AI governance across the organization
- Integration with existing risk management and internal audit functions
AI inventory and classification:
- Complete inventory of all AI systems in use
- Classification by materiality and risk level
- Regular review and update of the inventory
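The inventory and classification items above can be sketched as a simple record type with a toy risk-tiering rule. The field names and tier logic here are illustrative assumptions, not terms from the BOT guidelines:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the institution's AI inventory (illustrative fields)."""
    name: str
    owner: str                 # accountable business unit
    third_party: bool          # vendor-provided vs in-house
    customer_impacting: bool   # influences decisions affecting customers
    materiality: str           # "low" | "medium" | "high"

def risk_tier(record: AISystemRecord) -> str:
    """Toy classification rule: customer-impacting or high-materiality
    systems get the strictest governance tier."""
    if record.customer_impacting or record.materiality == "high":
        return "high"
    if record.third_party or record.materiality == "medium":
        return "medium"
    return "low"

scoring = AISystemRecord("credit-scoring", "Retail Credit", False, True, "high")
chatbot = AISystemRecord("internal-helpdesk-bot", "IT", True, False, "low")
assert risk_tier(scoring) == "high"   # customer-impacting → strictest tier
assert risk_tier(chatbot) == "medium" # third-party, but not customer-facing
```

In practice the tiering criteria would come from the institution's own risk-appetite statement; the point is that each inventory entry carries enough metadata to drive proportionate governance.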
Pillar 2: AI System Development and Security Controls
Data governance:
- Data quality standards for AI training and operational data
- Data lineage tracking
- Protection of personal and sensitive data in accordance with Thailand's PDPA
- Bias monitoring in training data
Model development:
- Documented development processes
- Validation and testing requirements before deployment
- Model documentation including design, data sources, limitations, and assumptions
- Peer review for high-risk models
Deployment controls:
- Staged deployment with monitoring
- Integration testing with existing systems
- User training and change management
- Rollback procedures
Ongoing monitoring:
- Performance monitoring against defined metrics
- Data and model drift detection
- Regular model revalidation
- Incident detection and response
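One common way to operationalize the drift-detection item above is the Population Stability Index (PSI), which compares the distribution of a model input or score at deployment against its live distribution. This is an illustrative sketch, not a method the BOT guidelines prescribe, and the 0.1 alert threshold is a widely used convention rather than a regulatory figure:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compute PSI between a baseline (expected) and a live (actual)
    sample of a model input or score. Higher PSI means more drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty bins to avoid log(0).
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                   # scores at validation
drifted = [min(1.0, i / 100 + 0.3) for i in range(100)]    # shifted live scores

assert population_stability_index(baseline, baseline) < 0.01
assert population_stability_index(baseline, drifted) > 0.1  # common alert level
```

A monitoring job would run this check on a schedule and raise an incident when PSI crosses the institution's own alert threshold, feeding the revalidation and incident-response steps listed above.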
Third-party AI management:
- Due diligence on AI vendors and service providers
- Contractual requirements for data handling and model performance
- Ongoing oversight of third-party AI performance
- Exit strategies and contingency plans
Key Principles
The guidelines are built on principles that closely align with Singapore's FEAT framework:
Fairness: AI systems should not produce unfairly biased outcomes. Financial institutions must monitor for bias across demographic groups and customer segments. Credit scoring, lending decisions, and insurance pricing are specific areas of focus.
Ethics: AI should be used responsibly and in accordance with ethical standards. This includes ensuring AI applications serve legitimate business purposes and do not cause disproportionate harm.
Accountability: Clear accountability structures must exist. The board bears ultimate responsibility, with senior management ensuring day-to-day governance.
Transparency: AI decision-making should be explainable to relevant stakeholders. Customers should understand when AI influences decisions affecting them. Regulators should have access to model documentation.
Comparison with Regional Financial AI Guidelines
| Feature | Thailand BOT | Singapore MAS | Malaysia BNM | Indonesia OJK |
|---|---|---|---|---|
| Status | Final (Sep 2025) | Proposed (Nov 2025) | Proposed (Aug 2025) | Mandatory (Dec 2025) |
| Scope | All FSPs | All FIs | Banks, insurers | Banks |
| Principles | FEAT-aligned | FEAT | BNM principles | Pancasila + 6 |
| Third-party AI | Covered | Covered | Covered | Covered |
| GenAI specific | Limited | Yes (MindForge) | Limited | Limited |
| Proportionality | Yes | Yes | Yes | Yes |
How to Comply
Step 1: Governance Structure
- Establish or update board-level AI oversight
- Define AI risk appetite and governance policies
- Assign AI governance responsibilities across the three lines of defense
- Integrate AI governance with existing risk management
Step 2: AI Inventory
- Catalog all AI systems in use (in-house and third-party)
- Classify each by risk level and materiality
- Prioritize governance efforts accordingly
Step 3: Lifecycle Controls
- Implement data governance standards for AI data
- Establish model development and validation processes
- Create deployment and monitoring procedures
- Define model retirement criteria
Step 4: Fairness and Transparency
- Define fairness metrics relevant to your AI applications
- Implement bias monitoring for credit scoring, lending, and pricing
- Establish mechanisms for customers to understand and contest AI decisions
- Document model decisions and their rationale
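A bias-monitoring check like the one described in Step 4 is often implemented as a selection-rate comparison between demographic groups. The "four-fifths" threshold below is a common industry convention, not a figure from the BOT guidelines:

```python
def selection_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of selection rates between two demographic groups.
    Values below ~0.8 (the 'four-fifths rule') often trigger review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = approved, 0 = declined, one entry per applicant in each group
approvals_group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approval rate
approvals_group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% approval rate

ratio = disparate_impact_ratio(approvals_group_a, approvals_group_b)
assert abs(ratio - 0.5) < 1e-9  # well below 0.8 — would warrant review
```

Which metric is appropriate (demographic parity, equalized odds, and so on) depends on the product; the guidelines' point is that the institution must define the metric, monitor it, and be able to explain outcomes to customers and regulators.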
Step 5: Third-Party Management
- Review AI vendor contracts and due diligence
- Establish ongoing monitoring of vendor AI performance
- Develop contingency plans for vendor issues
- Ensure vendor compliance with PDPA and BOT requirements
Related Regulations
- Thailand PDPA: Underlying data protection requirements for all AI data processing
- Singapore MAS AI Guidelines: Comparable framework for financial AI governance
- Malaysia BNM AI Guidelines: Similar requirements in neighboring market
- Indonesia OJK AI Guidelines: Mandatory financial services AI governance
- ASEAN AI Governance Guide: Regional framework informing all financial regulators
Frequently Asked Questions
Are the BOT AI Risk Management Guidelines final and mandatory?
Yes. The BOT released the final version in September 2025, and the guidelines apply to all financial service providers under BOT supervision. Implementation expectations are proportionate to the institution's size and AI usage, but all regulated entities must have basic AI governance in place.
Do the guidelines cover third-party AI tools?
Yes. The guidelines explicitly cover third-party AI tools. Financial institutions remain responsible for AI governance even when using vendor-provided AI systems. This includes due diligence, contractual protections, ongoing monitoring, and exit strategies.
How do the BOT guidelines compare with Singapore's MAS guidelines?
They are closely aligned. Both use FEAT-aligned principles, require board oversight, mandate lifecycle controls, and expect proportionate implementation. Key differences: BOT guidelines were finalized earlier (September 2025 vs MAS still in consultation), and MAS has more explicit GenAI provisions through Project MindForge.
What do the guidelines require for bias monitoring?
Financial institutions must monitor AI systems for unfair bias across demographic groups and customer segments. This is particularly important for credit scoring, lending decisions, and insurance pricing — areas where AI bias could have significant financial impact on customers.
References
- AI Risk Management Guidelines for Financial Service Providers. Bank of Thailand (BOT) (2025)
- Thailand Issues AI Risk Management Guidelines for Financial Service Providers. Tilleke & Gibbins (2025)
