The AI regulatory landscape is moving faster than most organizations expect. What was guidance in 2024 is becoming law in 2026. What was unregulated is now subject to oversight. Businesses that wait for final regulations before preparing may find themselves scrambling to comply.
This guide examines emerging regulatory trends and helps you prepare for what's coming—with a focus on Singapore, Malaysia, and Thailand, while considering global influences.
Executive Summary
- AI regulation is accelerating globally, moving from soft guidance to binding requirements
- Key trends: mandatory transparency, accountability requirements, sector-specific rules, risk-based frameworks, audit/certification requirements
- ASEAN frameworks are emerging, with Singapore leading and Malaysia and Thailand developing their approaches
- The EU AI Act is influencing global standards and will affect businesses operating internationally
- Enforcement is materializing—regulators are building capacity to hold organizations accountable
- Prepare now for requirements taking effect in 2026-2028; reactive compliance will be more expensive
Why This Matters Now
Regulations always lag technology—but the gap is closing. GenAI exploded in 2023; regulatory responses are arriving in 2025-2026. The lag is shorter than previous technology waves.
Early movers gain advantage. Organizations building governance infrastructure now will have lower compliance costs and smoother transitions than those who wait.
Reactive compliance is expensive. Retrofitting governance into AI systems is harder and costlier than building it in from the start.
Board and investor scrutiny is increasing. "What's our AI governance?" is now a standard board question. "We're waiting to see what regulations require" is no longer an acceptable answer.
Major Regulatory Trends
Trend 1: Mandatory AI Transparency
What's happening: Regulators are requiring organizations to disclose AI use and explain how AI systems make decisions.
Forms this takes:
- Disclosure that AI is being used in interactions
- Explanations of AI decision logic
- Notification when AI significantly affects individuals
- Public registries of high-risk AI systems
Why it's happening: Public concern about AI "black boxes" making important decisions without accountability.
Prepare by:
- Documenting how AI systems make decisions
- Building explanation capabilities into AI deployments
- Developing disclosure language and processes
Trend 2: Algorithmic Accountability
What's happening: Organizations are being held responsible for AI outcomes, including unintended consequences.
Forms this takes:
- Requirements for AI impact assessments
- Accountability for AI discrimination or bias
- Liability frameworks for AI-caused harms
- Audit and testing requirements
Why it's happening: AI mistakes at scale can harm many people quickly. Society wants someone accountable.
Prepare by:
- Conducting AI risk and impact assessments
- Testing AI systems for bias and fairness
- Establishing clear accountability for AI systems
- Documenting AI decision processes for auditability
Trend 3: Sector-Specific AI Rules
What's happening: Beyond general AI regulation, specific sectors face tailored requirements.
High-regulation sectors:
- Financial services (credit decisions, fraud detection, trading)
- Healthcare (diagnostics, treatment recommendations)
- Employment (hiring, performance management)
- Education (assessment, student support)
Why it's happening: Sector regulators understand domain-specific risks better than general technology regulators.
Prepare by:
- Identifying sector-specific guidance that applies to your AI use
- Engaging with sector regulators on AI questions
- Anticipating sector-specific requirements even where not yet mandated
Trend 4: Risk-Based Regulatory Frameworks
What's happening: Regulations are adopting risk-tiering—higher requirements for higher-risk AI.
Common risk tiers:
- Unacceptable risk: Prohibited AI uses (e.g., social scoring, subliminal manipulation)
- High risk: Significant requirements (AI affecting fundamental rights, safety-critical systems)
- Limited risk: Transparency obligations
- Minimal risk: Little or no specific requirements
Why it's happening: Recognizes that not all AI carries equal risk. Focuses regulatory resources where harm is greatest.
Prepare by:
- Assessing your AI systems against risk frameworks
- Prioritizing governance for highest-risk AI
- Anticipating which systems will face more stringent requirements
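The tier-assignment logic described above can be sketched as a simple triage function. This is an illustrative mock-up only: the tier names mirror the four-level model listed here, but the criteria (`prohibited_use`, `affects_rights`, and so on) are hypothetical placeholders, not any regulator's legal definitions.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    prohibited_use: bool        # e.g. social scoring, subliminal manipulation
    affects_rights: bool        # e.g. credit, hiring, or healthcare decisions
    interacts_with_public: bool # e.g. customer-facing chatbots

def classify(system: AISystem) -> str:
    """Map a system to an illustrative risk tier, highest risk first."""
    if system.prohibited_use:
        return "unacceptable"
    if system.affects_rights:
        return "high"
    if system.interacts_with_public:
        return "limited"
    return "minimal"

print(classify(AISystem("loan-scoring", False, True, True)))  # high
```

Even a rough triage like this helps with the "prioritizing governance for highest-risk AI" step: it forces an explicit, reviewable statement of which attributes push a system into a higher tier.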
Trend 5: Cross-Border Harmonization Efforts
What's happening: Regional and international bodies are working to align AI regulations.
Examples:
- ASEAN Digital Economy Framework
- OECD AI Principles
- Global Partnership on AI
- Bilateral agreements on AI governance
Why it's happening: AI is inherently cross-border. Regulatory fragmentation creates compliance complexity and competitive distortions.
Prepare by:
- Designing governance to meet highest applicable standards
- Monitoring regional harmonization developments
- Participating in industry dialogue on standards
Trend 6: AI Audit and Certification Frameworks
What's happening: Third-party assessment and certification of AI systems is emerging.
Forms this takes:
- Voluntary certification programs
- Audit requirements for high-risk AI
- Standards bodies developing AI certification criteria
- Professional certifications for AI governance
Why it's happening: Provides assurance to regulators, customers, and stakeholders that AI meets standards.
Prepare by:
- Following development of relevant certification standards
- Building documentation that supports third-party assessment
- Considering voluntary certification for competitive differentiation
Regional Outlook: ASEAN Focus
Singapore
Current state: Most developed AI governance framework in ASEAN.
Key frameworks:
- Model AI Governance Framework (voluntary, but influential)
- IMDA AI Verify (testing toolkit and governance framework)
- Sector-specific guidance (MAS for financial services)
What's coming (2026-2028):
- Likely movement toward mandatory requirements for high-risk AI
- Financial services AI regulation expected to strengthen
- AI Verify may become expected standard for certain AI uses
- Potential AI-specific legislation
Implications: Organizations operating in Singapore should align with Model AI Governance Framework now; expect it to become more binding.
Malaysia
Current state: Developing AI governance framework.
Key developments:
- National AI Roadmap
- PDPA amendments potentially addressing AI
- MDEC guidance on AI development
- Sector-specific considerations emerging
What's coming (2026-2028):
- Expected AI governance guidelines
- PDPA updates likely to address AI more explicitly
- Sector regulators (Bank Negara, etc.) expected to issue guidance
- Potential AI-specific legislation under development
Implications: Regulatory pressure is less immediate than in Singapore, but organizations should prepare for requirements emerging within two to three years.
Thailand
Current state: Foundational frameworks in development.
Key developments:
- PDPA enacted and being enforced
- DEPA AI governance initiatives
- National AI Strategy
- Draft AI ethics guidelines
What's coming (2026-2028):
- Expected AI governance framework from DEPA
- Potential AI-specific legislation
- PDPA enforcement affecting AI use of personal data
- Sector-specific guidance likely to emerge
Implications: PDPA compliance for AI use of personal data is the immediate priority. For AI-specific governance, the Singapore model is a sensible baseline.
ASEAN Regional
Harmonization efforts:
- ASEAN Digital Economy Framework
- ASEAN Digital Masterplan 2025
- Discussions on cross-border AI governance
Expected developments:
- Movement toward regional AI principles
- Mutual recognition frameworks possible
- Cross-border data flow frameworks affecting AI
Timeline: Anticipated Regulatory Milestones (2026-2028)
| Period | Singapore | Malaysia | Thailand | Global |
|---|---|---|---|---|
| Early 2026 | Enhanced AI Verify adoption; MAS AI guidance updates | AI governance guidance expected | DEPA AI framework expected | EU AI Act high-risk obligations begin |
| Late 2026 | Potential mandatory requirements for high-risk AI | PDPA AI amendments likely | Sector guidance emerging | EU AI Act enforcement ramp-up |
| 2027 | AI-specific legislation possible | AI legislation under development | AI legislation possible | Global standards converging |
| 2028 | Mature enforcement landscape | Regulatory framework maturing | Growing enforcement capacity | International harmonization progress |
Note: This timeline represents informed projections based on current developments. Actual regulatory timing may vary.
How to Prepare
Immediate Actions (Now)
Establish baseline governance:
- AI inventory: Know what AI you're using
- Risk assessment: Understand risks of current AI systems
- Basic policies: AI acceptable use, data handling
- Accountability: Assign AI governance ownership
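The baseline actions above can start as something very lightweight, even a spreadsheet. As a sketch, the record below shows one plausible inventory schema; every field name here is a suggestion, not a regulatory requirement.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class InventoryEntry:
    system_name: str        # internal identifier
    vendor_or_internal: str # "vendor" or "internal"
    business_use: str       # plain-language purpose
    personal_data: bool     # triggers PDPA/GDPR considerations
    risk_level: str         # e.g. "high", "limited", "minimal"
    owner: str              # accountable person or team

# Hypothetical example entries
entries = [
    InventoryEntry("chatbot-support", "vendor", "customer service",
                   True, "limited", "cx-team"),
    InventoryEntry("resume-screener", "internal", "hiring",
                   True, "high", "hr-ops"),
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(entries[0]).keys())
    writer.writeheader()
    for e in entries:
        writer.writerow(asdict(e))
```

The key design point is assigning a named owner and a risk level per system from day one; both are fields that almost every emerging framework expects you to be able to produce on request.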
Monitor developments:
- Follow regulatory announcements in your jurisdictions
- Track sector regulator guidance
- Engage with industry associations on AI policy
Near-Term Actions (Next 12 Months)
Build governance infrastructure:
- Formal AI governance framework
- Risk-based assessment processes
- Documentation and audit trails
- Training for AI users and deployers
Assess highest-risk AI:
- Identify AI that would be "high-risk" under emerging frameworks
- Conduct impact assessments
- Implement enhanced controls
Medium-Term Actions (12-24 Months)
Prepare for compliance:
- Gap analysis against expected regulations
- Remediation planning for gaps
- Budget for compliance activities
- Consider certification where relevant
Build flexibility:
- Governance processes that can adapt to new requirements
- Modular documentation that can be expanded
- Technical capabilities for explanation and audit
Checklist: Regulatory Readiness
Awareness
- Applicable jurisdictions identified
- Regulatory developments monitored
- Current requirements understood
- Emerging trends tracked
Foundation
- AI systems inventoried
- Risk levels assessed
- Governance ownership assigned
- Basic policies in place
Documentation
- AI system documentation maintained
- Decision logic documented
- Risk assessments documented
- Audit trails in place
Capability
- Can explain AI decisions
- Can audit AI systems
- Can respond to regulatory inquiry
- Can adapt to new requirements
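A checklist like this is most useful when it is periodically self-assessed. The sketch below shows one way to score it and surface gaps; the items and their true/false marks are illustrative sample data, not an assessment of any real organization.

```python
# Illustrative self-assessment over two of the checklist categories above.
checklist = {
    "Awareness": {
        "Applicable jurisdictions identified": True,
        "Regulatory developments monitored": True,
        "Current requirements understood": False,
        "Emerging trends tracked": True,
    },
    "Foundation": {
        "AI systems inventoried": True,
        "Risk levels assessed": False,
        "Governance ownership assigned": True,
        "Basic policies in place": True,
    },
}

def gaps(checklist):
    """Return (category, item) pairs for every unmet checklist item."""
    return [(cat, item)
            for cat, items in checklist.items()
            for item, done in items.items() if not done]

for cat, item in gaps(checklist):
    print(f"GAP [{cat}]: {item}")
```

Running this quarterly and tracking how the gap list shrinks gives a simple, auditable record of readiness progress.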
Frequently Asked Questions
When will AI regulations take effect?
Some already have. Singapore's Model AI Governance Framework, while voluntary, sets expectations. The EU AI Act has phased implementation through 2027. ASEAN countries are expected to have significant frameworks by 2027-2028.
Will ASEAN countries follow the EU AI Act?
Not directly, but EU influence is significant. Singapore's risk-based approach parallels EU thinking. Businesses operating internationally may need to meet EU standards regardless of local requirements.
What industries face the most regulatory pressure?
Financial services, healthcare, and employment are highest priority. AI affecting consumer rights, safety-critical applications, and decisions about individuals faces most scrutiny.
How do we stay updated on changes?
Monitor: regulatory body announcements, industry association updates, legal advisories, and guidance from consultancies specializing in AI governance. Consider joining industry working groups.
Should we comply with EU rules if we're not in EU?
If you have EU customers or operations, yes. EU requirements apply to AI systems that affect EU citizens, regardless of where deployed. Even without EU exposure, EU standards influence global expectations.
What if regulations are unclear?
Document your interpretation and rationale. Follow industry best practices. Engage with regulators where possible. Build flexibility to adapt as clarity emerges.
Conclusion
AI regulation isn't coming—it's arriving. The trajectory is clear: more requirements, more enforcement, more accountability. Organizations that prepare now will have an advantage over those that wait.
You don't need to predict exactly what regulations will say. Focus on good governance practices: know what AI you have, assess risks, document decisions, ensure accountability. These fundamentals will serve you regardless of specific regulatory requirements.
The cost of preparing is modest. The cost of scrambling to comply after regulations take effect—or worse, being enforcement's example case—is much higher.
Book an AI Readiness Audit
Uncertain about your regulatory readiness? Our AI Readiness Audit assesses your current state against emerging requirements and provides a prioritized preparation roadmap.
Disclaimer
Regulatory predictions are inherently uncertain. This article reflects current understanding as of publication and should not be relied upon as legal advice. Regulatory timing, scope, and requirements may differ from projections. Consult qualified legal counsel for jurisdiction-specific compliance guidance.
References
- Singapore IMDA AI Governance frameworks
- Malaysia National AI Roadmap
- Thailand DEPA AI initiatives
- EU AI Act text and implementation guidance
- OECD AI Principles
- ASEAN Digital Economy Framework
- Industry regulatory analyses