
The Bank of Thailand (BOT) released mandatory AI Risk Management Guidelines in September 2025 for all financial service providers. Built on FEAT-aligned principles, they require governance structures, lifecycle controls, and fairness monitoring.

The Philippines National Privacy Commission issued Advisory Guidelines on AI in December 2024, requiring organizations to identify and mitigate algorithmic bias, avoid AI washing, and comply with the Data Privacy Act in all AI data processing.

Vietnam's Law on Artificial Intelligence, effective March 2026, is the first standalone binding AI law in Southeast Asia. It introduces risk-based classification, registration requirements, and penalties up to VND 2 billion for non-compliance.

Thailand's PDPA imposes strict data protection requirements on AI systems. With a draft AI law expected in 2026 and new BOT AI guidelines for financial services, companies must prepare for an increasingly regulated environment.

Indonesia's Personal Data Protection Law (UU PDP), fully effective since October 2024, is modeled on the GDPR and applies to any AI system that processes personal data. With mandatory AI regulations expected in early 2026, companies should bring their AI data practices into compliance now.

Malaysia's PDPA amendments (effective June 2025) introduce mandatory data protection officer (DPO) appointments, breach notification duties, and data portability rights. Combined with the new AIGE Guidelines, companies using AI must adapt their data practices.

Singapore's Model AI Governance Framework has evolved through three editions: Traditional AI (2020), Generative AI (2024), and Agentic AI (2026). Together they form one of the most comprehensive voluntary AI governance frameworks in Asia.

The Monetary Authority of Singapore (MAS) released AI Risk Management Guidelines in November 2025 for all financial institutions. Built on the FEAT principles, these guidelines establish comprehensive AI governance requirements for banks, insurers, and fintechs.

Singapore's Personal Data Protection Act (PDPA) applies to all AI systems processing personal data. With the 2024 PDPC Advisory Guidelines on AI, companies now have specific guidance on consent, anonymization, and responsible data use for AI development.

The ASEAN Guide on AI Governance and Ethics provides a voluntary framework for responsible AI across all 10 member states. Expanded in 2025 to cover Generative AI, it is shaping how businesses deploy AI across the region.

California SB 53 requires frontier AI model developers to publish safety frameworks, report incidents, and protect whistleblowers. If you develop large AI models, here is what you need to know.

The Texas Responsible AI Governance Act (TRAIGA) took effect January 1, 2026. It applies to any business serving Texas residents and introduces AI disclosure requirements, prohibited uses, and governance standards.
Book a complimentary AI Readiness Audit to identify opportunities and risks specific to your organization.