Thailand's AI Regulatory Landscape
Thailand is building one of Southeast Asia's most structured approaches to AI governance, and organizations operating in the country need to understand what that means for their technology investments. The regulatory architecture rests on three pillars: the Personal Data Protection Act (PDPA), which has been enforceable since June 2022; a draft AI law taking shape under a risk-based framework with binding rules expected in 2026; and the Bank of Thailand's AI Risk Management Guidelines, which became mandatory for financial institutions in September 2025.
Beyond legislation, Thailand has invested in institutional infrastructure. The Electronic Transactions Development Agency (ETDA) now houses a dedicated AI Governance Center (AIGC), operates a regulatory sandbox for AI applications, and is executing against a National AI Strategy running through 2027. For companies deploying AI across Thai operations, the signal is clear: governance is not aspirational here. It is becoming operational.
Thailand PDPA and AI
Overview
The Personal Data Protection Act B.E. 2562 (2019) is Thailand's comprehensive data protection statute. It applies to all processing of personal data, and that scope explicitly includes data flowing through AI systems. Any organization training models on Thai consumer data, running automated decision-making, or deploying AI tools that ingest personal information falls under its requirements.
Key Requirements for AI Systems
The PDPA mandates that every instance of personal data processing rest on a lawful basis, whether that is consent, contractual necessity, legal obligation, vital interests, public interest, or legitimate interests. For AI deployments, consent requirements carry particular weight. Consent must be freely given, specific, informed, and unambiguous. In practice, this means organizations need to explain clearly how personal data will be used within AI systems, separate AI-specific processing consent from general service consent, and honor withdrawal requests at any time.
Sensitive data processing demands an even higher bar. Explicit consent is required before AI systems can process categories including racial or ethnic origin, political opinions, religious beliefs, genetic data, biometric data, health data, sexual orientation, and criminal records. This requirement has direct implications for companies deploying facial recognition, voice analysis, or health-related AI tools in the Thai market.
The law also requires data protection impact assessments for processing likely to create high risks to individuals' rights and freedoms. Most AI systems performing automated decision-making on personal data should undergo such an assessment. On cross-border transfers, personal data can only leave Thailand if the destination country maintains adequate data protection standards or alternative safeguards are in place, a constraint that affects cloud-based AI architectures routing data through international infrastructure.
Penalties
The PDPA's enforcement regime carries real financial consequences. Administrative fines can reach THB 5 million (approximately USD 140,000). Civil liability extends to actual damages plus punitive damages of up to twice the actual damages. Criminal penalties include up to one year of imprisonment and fines of up to THB 1 million.
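The civil exposure formula above compounds quickly: actual damages plus punitive damages of up to twice that amount means civil liability alone can reach three times actual damages, before any administrative fine. A back-of-envelope sketch, using only the figures stated above:

```python
# Administrative fines are capped at THB 5 million.
ADMIN_FINE_CAP_THB = 5_000_000

def max_civil_exposure(actual_damages_thb: int) -> int:
    # Actual damages plus punitive damages of up to 2x actual.
    return actual_damages_thb + 2 * actual_damages_thb

def max_total_exposure(actual_damages_thb: int) -> int:
    # Worst case: full civil liability plus the maximum administrative fine.
    return max_civil_exposure(actual_damages_thb) + ADMIN_FINE_CAP_THB

# A breach causing THB 2M in actual damages could cost up to THB 6M in
# civil liability plus THB 5M in fines: THB 11M total.
```

This excludes criminal penalties and reputational cost, so it is a floor on worst-case exposure, not a ceiling.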
These are not theoretical provisions. In August 2024, the Personal Data Protection Committee issued its first administrative penalty, a THB 7 million fine for PDPA violations. That enforcement action signaled a shift from education-focused regulation to active compliance monitoring, and it should prompt organizations to treat PDPA obligations with the same urgency they bring to GDPR compliance in European markets.
Draft AI Law (2025-2026)
Current Status
Thailand's Ministry of Digital Economy and Society (MDES) and the Electronic Transactions Development Agency (ETDA) are developing a dedicated AI law that will move the country from voluntary principles to binding regulation. As Tilleke & Gibbins noted in their analysis, draft principles were issued in 2025 and remain non-binding for now. However, according to Norton Rose Fulbright's assessment, 2026 is the targeted deadline for a formalized framework. The approach draws heavily from the EU AI Act's risk-based model, adapted for Thailand's regulatory context.
Expected Framework
The published principles outline a tiered risk classification system. High-risk AI, covering healthcare, law enforcement, critical infrastructure, and financial services, will face the most stringent requirements: mandatory risk assessments before deployment, ongoing monitoring obligations, transparency requirements for affected individuals, and human oversight mandates for critical decisions. Medium-risk systems, primarily customer-facing AI with moderate impact, will carry a lighter but still meaningful compliance burden. Low-risk administrative and operational AI will face minimal additional requirements.
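The tiering logic described above can be approximated in a few lines. This is a planning aid under stated assumptions, not an official classifier: the sector list mirrors the high-risk categories named in the published principles, but the tier labels and the customer-facing heuristic for the medium tier are illustrative.

```python
# High-risk sectors as named in the published draft principles.
HIGH_RISK_SECTORS = {"healthcare", "law_enforcement",
                     "critical_infrastructure", "financial_services"}

def classify_ai_system(sector: str, customer_facing: bool) -> str:
    """Assign a preliminary risk tier to an AI system."""
    if sector in HIGH_RISK_SECTORS:
        return "high"    # pre-deployment risk assessment, monitoring,
                         # transparency, human oversight
    if customer_facing:
        return "medium"  # lighter but still meaningful obligations
    return "low"         # minimal additional requirements
```

Running such a pass over an AI inventory gives a first-cut view of where the heaviest obligations will land once the framework becomes binding.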
AI Governance Center (AIGC)
ETDA's AI Governance Center is already functioning as a practical resource for organizations preparing for the coming regulatory shift. The center provides technical standards for AI governance, readiness assessments that help companies benchmark their current capabilities, implementation support for the AI ethics guidelines, and coordination with the regulatory sandbox program.
AI Regulatory Sandbox
ETDA's AI Innovation Testing Center offers a structured pathway for organizations seeking to deploy novel AI applications. The sandbox provides a controlled testing environment with regulatory guidance, reduces compliance requirements during the testing phase in exchange for transparency and data sharing, and creates a defined route from experimental deployment to full-scale operation. For companies exploring innovative AI use cases in Thailand, the sandbox represents an opportunity to shape regulatory expectations while de-risking deployment.
How to Comply
Step 1: PDPA Compliance
The foundation of any AI compliance program in Thailand starts with the PDPA. Organizations should begin by mapping all personal data processing across their AI systems, establishing a valid lawful basis for each processing activity, and building consent mechanisms specifically designed for AI data use. Data protection impact assessments are essential for high-risk AI deployments, and cross-border transfer safeguards must be in place before data moves outside Thailand's borders. This is not a one-time exercise. As AI systems evolve and ingest new data sources, the compliance mapping must evolve with them.
Step 2: AI Ethics Alignment
With binding AI legislation on the horizon, organizations that align with Thailand's published AI ethics principles now will find the transition to mandatory compliance far less disruptive. This means reviewing AI systems against the existing principles, implementing fairness monitoring and bias mitigation processes, establishing transparency mechanisms that can explain AI decisions to affected individuals, and creating accountability structures that assign clear ownership for AI governance outcomes.
Step 3: Sector-Specific Compliance
Several sectors face additional requirements beyond the baseline. Financial services firms must implement the Bank of Thailand's AI Risk Management Guidelines, which became mandatory in September 2025. Healthcare organizations need to layer existing health data regulations onto their AI applications. Telecommunications providers must comply with NBTC requirements governing AI in communications services. Companies operating across multiple sectors should expect to manage overlapping compliance obligations.
Step 4: Prepare for AI Legislation
The window for proactive preparation is narrowing. Organizations should actively monitor announcements from MDES and ETDA, conduct a comprehensive inventory of their AI systems with preliminary risk classifications, and begin building governance infrastructure flexible enough to absorb binding requirements when they arrive. Companies with innovative AI applications should consider engaging with ETDA's regulatory sandbox, both to test their systems under regulatory guidance and to contribute to the development of workable standards.
Related Regulations
Thailand's regulatory trajectory does not exist in isolation. Singapore's PDPA and Model AI Governance Framework represent the most mature governance approach in the region and offer a useful benchmark for what Thailand's framework may eventually resemble. Indonesia's PDP Law introduces GDPR-style data protection obligations that mirror many of Thailand's PDPA requirements. Malaysia's PDPA 2010 presents a comparable and similarly evolving data protection landscape. Across the region, the ASEAN Guide on AI Governance provides the multilateral framework that Thailand's national approach is designed to align with. Organizations operating across Southeast Asian markets should build compliance architectures that accommodate this converging but still fragmented regulatory environment.
Common Questions
Does the PDPA apply to AI systems?
Yes. The PDPA applies to all processing of personal data regardless of the technology used. AI systems that collect, process, store, or generate personal data must comply. This includes AI training data, real-time processing, and AI-generated outputs containing personal data.
When will Thailand's AI law take effect?
Thailand has targeted 2026 for formalizing its AI regulatory framework. Draft principles were issued in 2025 by MDES and ETDA. The final form — whether standalone legislation, regulation, or enforceable guidelines — has not been confirmed.
What penalties apply under the PDPA?
Administrative fines up to THB 5 million (~USD 140,000), civil liability including actual damages plus punitive damages up to twice the actual amount, and criminal penalties of up to one year of imprisonment and/or a THB 1 million fine for serious violations.
Is there a regulatory sandbox for AI in Thailand?
Yes. ETDA's AI Innovation Testing Center provides a regulatory sandbox for AI applications. Companies can test AI systems under relaxed regulatory conditions in exchange for transparency and data sharing. This is particularly useful for innovative AI applications where regulatory requirements are unclear.
Do the BOT AI guidelines apply to fintech companies?
Yes. The BOT AI Risk Management Guidelines (September 2025) apply to all financial institutions, special financial institutions, and payment service providers. Licensed fintech companies fall under BOT supervision and must comply. Implementation can be proportionate to the firm's size and AI usage.
References
- Data Protection Laws in Thailand — PDPA Overview. DLA Piper (2024).
- Thailand Resumes Development of AI Regulatory Framework. Tilleke & Gibbins (2025).
- Thailand's Draft AI Law: A New Era for Governance and Innovation. Norton Rose Fulbright (2025).
- PDPC Issues First Administrative Penalty Under PDPA — THB 7M Fine. Nishimura & Asahi (2024).
- Key Developments in Thailand's PDPA Regulations. IAPP (2024).
- Thailand AI Regulation: Ethics, ETDA, Thailand 4.0. Nemko Digital (2025).
- Thailand PDPA Crackdown 2025: Major Fines and Lessons. DLA Piper (2025).

