AI Compliance & Regulation · Point of View

AI Regulation Trends: What to Expect in the Next 2-3 Years

December 31, 2025 · 11 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Legal/Compliance, Consultant, CTO/CIO, Board Member, CISO, IT Manager, CEO/Founder, CHRO

Anticipate coming AI regulations with trend analysis for Singapore, Malaysia, and Thailand. Timeline of expected milestones and preparation guidance.


Key Takeaways

  1. Anticipate major AI regulatory developments coming in 2-3 years
  2. Understand the EU AI Act implementation timeline and implications
  3. Prepare for sector-specific AI regulations in healthcare and finance
  4. Build adaptive compliance frameworks for evolving requirements
  5. Position your organization ahead of regulatory changes

The AI regulatory landscape is evolving at a pace that has caught many organizations off guard. What existed as voluntary guidance in 2024 is crystallizing into binding law in 2026. What operated in an unregulated vacuum is now subject to formal oversight. For business leaders accustomed to the leisurely regulatory cycles that followed previous technology waves, this acceleration demands a fundamentally different posture. Waiting for final regulations before preparing is no longer a viable strategy. It is a liability.

This analysis examines the regulatory trends shaping AI governance across Southeast Asia and the broader global landscape, with particular attention to Singapore, Malaysia, and Thailand, and offers a framework for organizational readiness over the next two to three years.

Why This Matters Now

Regulation has always trailed technology, but the gap is narrowing with unusual speed. Generative AI reached mainstream adoption in 2023, and substantive regulatory responses began arriving in 2025 and 2026. The interval between technological disruption and governmental response is measurably shorter than it was for cloud computing, social media, or mobile payments.

This compressed timeline creates a clear strategic divide. Organizations that invest in governance infrastructure today will absorb compliance costs incrementally and transition smoothly as requirements formalize. Those that defer will face the far more expensive proposition of retrofitting governance into systems designed without it. The difference is not marginal. According to the International Association of Privacy Professionals (IAPP), organizations that embed compliance into system design from the outset spend roughly 30 to 50 percent less on governance over a five-year period compared to those that retrofit.

The question from boards and investors has also shifted. "What is our AI governance posture?" has become a standard agenda item in boardrooms across the region. Responding with "We are waiting to see what regulators require" no longer satisfies institutional scrutiny.

Mandatory AI Transparency

Regulators worldwide are converging on a common demand: organizations must disclose when and how they use AI to make decisions that affect individuals. This takes several forms, from requiring notification that a customer is interacting with an AI system, to mandating explanations of the decision logic behind automated outcomes, to establishing public registries of high-risk AI deployments.

The momentum behind mandatory transparency reflects a simple reality. Public tolerance for opaque algorithmic decision-making is eroding. The European Commission's 2024 Eurobarometer survey found that 88 percent of EU citizens believe organizations should be required to inform them when AI is used in decisions affecting their lives. While ASEAN-specific polling data remains limited, regulatory trajectories in the region suggest policymakers share this concern.

Organizations that begin documenting their AI decision processes now, building explanation capabilities into new deployments, and developing clear disclosure language will find compliance straightforward when requirements formalize. Those that treat their AI systems as inscrutable black boxes will face a far steeper path.

Algorithmic Accountability

A parallel trend is the expansion of organizational liability for AI outcomes, including unintended consequences. Regulators are increasingly requiring formal AI impact assessments before deployment, establishing accountability frameworks for algorithmic discrimination or bias, developing liability structures for AI-caused harms, and mandating ongoing audit and testing regimes.

The logic driving this shift is straightforward. AI mistakes propagate at scale. A flawed credit-scoring model does not affect one applicant; it affects thousands before the error surfaces. The Monetary Authority of Singapore (MAS) acknowledged this dynamic in its 2024 guidance on AI in financial services, noting that algorithmic errors can produce "systemic harm at a speed and scale not seen with traditional decision-making processes." Regulators want clear lines of accountability before that harm materializes.

Proactive organizations are already conducting AI risk assessments, testing for bias and fairness across protected categories, and establishing clear ownership structures for every AI system in production. These practices will form the baseline of compliance in virtually every jurisdiction within the next three years.
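The bias and fairness testing described above can be illustrated with a minimal sketch. This is a hypothetical example in plain Python, not a regulatory standard from this article: the field names, groups, and the 0.8 threshold (the common "four-fifths rule" heuristic) are all assumptions.

```python
# Minimal demographic-parity check for a binary decision system.
# Field names, group labels, and the 0.8 review threshold are
# illustrative assumptions, not requirements cited in this article.

def approval_rate(decisions, group):
    """Share of positive outcomes for records in the given group."""
    outcomes = [d["approved"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(decisions, group_a, group_b):
    """Ratio of approval rates; values below ~0.8 often warrant review."""
    rate_a = approval_rate(decisions, group_a)
    rate_b = approval_rate(decisions, group_b)
    return rate_a / rate_b if rate_b else float("inf")

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
]
ratio = disparate_impact_ratio(decisions, "B", "A")
print(round(ratio, 2))  # 0.5 / 0.75 ≈ 0.67 — below 0.8, flag for review
```

In practice, a check like this would run for every protected category across every AI system in production, with results feeding the audit trail regulators increasingly expect.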

Sector-Specific AI Rules

General-purpose AI regulation is only part of the picture. Sector regulators, who understand domain-specific risks far better than technology ministries, are developing tailored requirements for their industries.

Financial services face the most advanced sector-specific AI rules, covering credit decisions, fraud detection, and algorithmic trading. Healthcare regulators are scrutinizing AI diagnostics and treatment recommendations. Employment law is catching up to address AI in hiring and performance management. Education authorities are beginning to examine automated assessment and student support systems.

The practical implication for organizations operating across multiple sectors is that compliance is not a single exercise. A company deploying AI in both lending and human resources will need to satisfy two distinct regulatory regimes, each with its own risk thresholds, documentation requirements, and oversight expectations.

Risk-Based Regulatory Frameworks

Rather than applying uniform requirements to all AI, regulators are adopting tiered approaches that calibrate obligations to risk levels. The EU AI Act, which the European Parliament adopted in March 2024, established the template that most frameworks now follow. At the top tier, certain AI uses are outright prohibited, including social scoring systems and subliminal manipulation techniques. High-risk AI, such as systems that affect fundamental rights or operate in safety-critical domains, faces extensive documentation, testing, and oversight requirements. Limited-risk AI triggers transparency obligations. Minimal-risk AI remains largely unregulated.
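A first-pass triage of an AI portfolio against these four tiers can be sketched as follows. The category sets and labels below are illustrative assumptions; actual classification under the EU AI Act requires legal analysis of the Act's annexes, not keyword matching.

```python
# Hypothetical triage helper mirroring the four risk tiers described above.
# The use-case labels and set membership are illustrative assumptions only.

PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"credit_decisions", "hiring", "medical_diagnosis"}
LIMITED_RISK_USES = {"chatbot", "content_generation"}  # transparency duties

def classify_risk_tier(use_case: str) -> str:
    """Map a use-case label to one of the four tiers in the text."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_DOMAINS:
        return "high-risk"
    if use_case in LIMITED_RISK_USES:
        return "limited-risk"
    return "minimal-risk"

print(classify_risk_tier("credit_decisions"))  # high-risk
print(classify_risk_tier("chatbot"))           # limited-risk
```

Even a rough triage like this lets governance teams rank systems by the obligations they are likely to attract, which is the point of the risk-based approach.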

Singapore's Infocomm Media Development Authority (IMDA) has adopted a similar risk-proportionate philosophy in its AI Verify framework. Thailand's Digital Economy Promotion Agency (DEPA) and Malaysia's developing guidelines show the same trajectory. The convergence is notable: regardless of jurisdiction, the regulatory consensus is that not all AI carries equal risk, and regulatory resources should concentrate where potential harm is greatest.

For organizations, this means the most urgent governance investment should target AI systems that would clearly fall into the high-risk category under any of these frameworks. Getting those systems documented, tested, and auditable is the highest-return compliance activity available today.

Cross-Border Harmonization Efforts

AI is inherently cross-border, and regulatory fragmentation creates both compliance complexity and competitive distortions. Several international bodies are working to align standards. The OECD AI Principles, endorsed by over 40 countries, provide a foundational reference point. The ASEAN Digital Economy Framework is working toward regional coherence. Bilateral agreements on AI governance are multiplying.

Full harmonization remains distant, but the direction of travel is clear. Organizations that design their governance to meet the highest applicable standard across their operating jurisdictions will avoid the costly exercise of maintaining multiple parallel compliance frameworks.

AI Audit and Certification Frameworks

The final major trend is the emergence of third-party assessment and certification for AI systems. Voluntary certification programs are appearing across multiple jurisdictions. Standards bodies, including ISO and IEEE, are developing formal AI certification criteria. Audit requirements for high-risk AI are becoming embedded in draft regulations. Professional certifications for AI governance practitioners are gaining traction.

For organizations, the strategic question is whether to pursue voluntary certification now, before it becomes mandatory, and gain both the reputational benefit and the operational discipline that comes with preparing for external assessment. History suggests that early adopters of certification regimes, from ISO 27001 in information security to SOC 2 in cloud services, enjoyed meaningful competitive differentiation in their markets.

Regional Outlook: ASEAN Focus

Singapore

Singapore maintains the most developed AI governance framework in ASEAN. The Model AI Governance Framework, though voluntary, has become the de facto standard for organizations operating in the city-state. IMDA's AI Verify provides both a testing toolkit and a governance framework that organizations can adopt immediately. MAS has issued detailed guidance on AI in financial services that effectively functions as regulation for the banking and insurance sectors.

Over the next two to three years, Singapore is expected to move toward mandatory requirements for high-risk AI. Financial services AI regulation will almost certainly tighten further. AI Verify may transition from voluntary best practice to expected standard for specific AI use cases. Formal AI-specific legislation remains a possibility. Organizations operating in Singapore should align with the Model AI Governance Framework now, on the working assumption that adherence will shift from optional to obligatory.

Malaysia

Malaysia's AI governance framework is still taking shape. The National AI Roadmap provides strategic direction, and amendments to the Personal Data Protection Act (PDPA) are expected to address AI more directly. The Malaysia Digital Economy Corporation (MDEC) has issued initial guidance on responsible AI development. Sector regulators, particularly Bank Negara Malaysia, are expected to issue AI-specific guidance within the next 18 months.

The regulatory pressure in Malaysia is less immediate than in Singapore, but organizations should not mistake a longer runway for an absence of trajectory. Requirements are coming, and the two-to-three-year preparation window available today is an asset that will not last.

Thailand

Thailand is building its AI governance foundations on the base of its Personal Data Protection Act, which is now enacted and actively enforced. DEPA is leading AI governance initiatives alongside the National AI Strategy. Draft AI ethics guidelines have circulated, and a formal AI governance framework from DEPA is expected within the next 12 to 18 months.

For organizations operating in Thailand, compliance with the PDPA as it applies to AI-driven processing of personal data is the immediate priority. Beyond that, adopting Singapore's Model AI Governance Framework as a baseline provides a pragmatic hedge against whatever specific requirements Thailand ultimately enacts.

ASEAN Regional Developments

At the regional level, the ASEAN Digital Economy Framework and the ASEAN Digital Masterplan 2025 signal a clear intent to move toward shared principles for AI governance. Cross-border data flow frameworks that will directly affect AI operations are under active discussion. Mutual recognition frameworks for AI certification are a possibility, though likely a medium-term rather than near-term development.

Anticipated Regulatory Timeline (2026 to 2028)

| Period | Singapore | Malaysia | Thailand | Global |
|---|---|---|---|---|
| Early 2026 | Enhanced AI Verify adoption; MAS AI guidance updates | AI governance guidance expected | DEPA AI framework expected | EU AI Act high-risk obligations begin |
| Late 2026 | Potential mandatory requirements for high-risk AI | PDPA AI amendments likely | Sector guidance emerging | EU AI Act enforcement ramp-up |
| 2027 | AI-specific legislation possible | AI legislation under development | AI legislation possible | Global standards converging |
| 2028 | Mature enforcement landscape | Regulatory framework maturing | Growing enforcement capacity | International harmonization progress |

This timeline represents informed projections based on current regulatory trajectories. Actual timing may vary, but the direction and general sequencing carry high confidence.

How to Prepare

Immediate Actions

The foundational step is visibility. Organizations need a complete inventory of every AI system in production or development, an honest assessment of the risk each system carries, basic policies governing acceptable AI use and data handling, and a clear assignment of governance ownership to a named individual or committee. Without this baseline, every subsequent compliance activity operates in the dark.
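The inventory baseline described above can be sketched as a simple record structure. All field names here are illustrative assumptions about what such a record might capture, covering the four elements named in the text: the system, its risk level, governing policies, and a named owner.

```python
# Sketch of a minimal AI system inventory record. Field names and the
# example entries are hypothetical, not a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    status: str            # "production" or "development"
    risk_level: str        # e.g. "high", "limited", "minimal"
    owner: str             # named individual or committee; "" if unassigned
    policies: list = field(default_factory=list)

def unowned_systems(inventory):
    """Surface systems with no assigned governance owner."""
    return [s.name for s in inventory if not s.owner]

inventory = [
    AISystemRecord("credit-scoring-v2", "production", "high",
                   "Model Risk Committee", ["acceptable-use", "data-handling"]),
    AISystemRecord("marketing-copy-bot", "development", "minimal", ""),
]
print(unowned_systems(inventory))  # ['marketing-copy-bot']
```

A query as simple as `unowned_systems` illustrates why the inventory matters: without it, gaps in governance ownership are invisible until a regulator or an incident exposes them.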

Alongside this internal work, organizations should establish a systematic process for monitoring regulatory developments across their operating jurisdictions, tracking sector regulator guidance, and engaging with industry associations contributing to AI policy discussions.

Near-Term Actions (Next 12 Months)

With baseline visibility established, the next phase is building formal governance infrastructure. This includes a documented AI governance framework with defined processes for risk assessment, approval, and ongoing monitoring. It includes building audit trails that will satisfy regulatory scrutiny. And it includes training programs for everyone involved in deploying, managing, or overseeing AI systems.

The highest priority within this phase is identifying which AI systems would qualify as "high-risk" under emerging frameworks and subjecting those systems to impact assessments and enhanced controls first.

Medium-Term Actions (12 to 24 Months)

As regulatory requirements crystallize, organizations should conduct gap analyses against expected regulations, develop remediation plans with realistic timelines and budgets, and consider pursuing relevant certifications where they provide competitive advantage.

The governance infrastructure built in the near term should be designed for adaptability. Modular documentation that can expand to meet new requirements, technical capabilities for explanation and audit that can be extended to additional systems, and review processes that can absorb new regulatory inputs without requiring fundamental redesign will all prove their value as the regulatory landscape continues to evolve.

Strategic Implications for Business Planning

AI regulation trends carry direct implications for business strategy and investment planning that executives should incorporate into their decision-making frameworks.

Compliance costs will become a significant operational expense for AI-intensive businesses. Organizations should budget 10 to 20 percent of their AI investment for governance and compliance activities, including documentation, auditing, training, and legal advisory. Regulatory requirements will increasingly differentiate between AI vendors: compliant vendors will command premium pricing while non-compliant vendors face market access restrictions, creating a compliance-driven vendor consolidation trend across the region.
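The budgeting guidance above translates into a simple calculation. The USD 2 million investment figure below is a made-up example, not a number from this article; only the 10 to 20 percent range comes from the text.

```python
# Illustrative compliance-budget estimate using the 10-20 percent range
# suggested above. The AI investment figure is hypothetical.

ai_investment = 2_000_000  # annual AI spend in USD (hypothetical)
low, high = 0.10, 0.20     # governance share suggested in the text

budget_range = (ai_investment * low, ai_investment * high)
print(f"Governance budget: ${budget_range[0]:,.0f} to ${budget_range[1]:,.0f}")
# Governance budget: $200,000 to $400,000
```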

Organizations that achieve compliance early will gain competitive advantages through faster market access in regulated jurisdictions, reduced insurance premiums as AI liability frameworks mature, and enhanced customer trust in markets where AI transparency becomes a purchasing criterion. The most effective strategic posture treats regulatory compliance not as a cost burden but as a competitive asset.

Practical Next Steps

Turning these insights into operational reality requires deliberate action. A cross-functional governance committee with clear decision-making authority and regular review cadences should serve as the organizational anchor for AI governance. Documenting current governance processes and identifying gaps against regulatory requirements in each operating market provides the roadmap. Standardized templates for governance reviews, approval workflows, and compliance documentation reduce friction and ensure consistency.

Quarterly governance assessments keep the framework current as both the organization's AI portfolio and the regulatory environment evolve. Internal capability building through targeted training programs for stakeholders across different business functions ensures that governance is not siloed within legal or compliance teams but embedded across the organization.

Effective governance structures require deliberate investment in organizational alignment, executive accountability, and transparent reporting mechanisms. Without these foundational elements, governance frameworks remain theoretical documents rather than living operational systems.

Disclaimer

Regulatory predictions are inherently uncertain. This article reflects current understanding as of publication and should not be relied upon as legal advice. Regulatory timing, scope, and requirements may differ from projections. Consult qualified legal counsel for jurisdiction-specific compliance guidance.

Common Questions

What major AI regulations are expected in the next two to three years?
Expect EU AI Act full implementation, expanded ASEAN frameworks, sector-specific rules in financial services and healthcare, and requirements for explainability and transparency globally.

How should organizations prepare for evolving requirements?
Build flexible compliance frameworks, implement good governance practices now, maintain documentation, and stay informed about developments in jurisdictions where you operate.

How do regional regulatory approaches differ?
The EU takes a comprehensive regulatory approach, the US focuses on sector-specific rules and enforcement, and ASEAN emphasizes principles-based governance with growing alignment.

References

  1. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024).
  2. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST) (2023).
  3. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).
  4. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
  5. OECD Principles on Artificial Intelligence. OECD (2019).
  6. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023).
  7. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore (2012).

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

