AI Readiness & Strategy · Framework

Enterprise Agility: A Strategic Framework

January 22, 2026 · 12 min read · Pertama Partners
Updated February 20, 2026
For: CTO/CIO, CFO, Consultant, IT Manager, CEO/Founder, CISO, Head of Operations, Product Manager, Board Member

Comprehensive framework for enterprise agility covering strategy, implementation, and optimization across global markets.


Key Takeaways

  1. Implement a four-tier authority matrix that reclassifies 65% of AI decisions from executive-level to team or department-level, reducing average decision cycles from 73 days to under 15 days while maintaining appropriate governance controls.
  2. Establish dynamic resource pools starting at 10% and scaling to 20-30% of AI budgets, with quarterly reallocation mechanisms based on emerging opportunities rather than locking resources into annual plans.
  3. Build dual operating systems that maintain hierarchical governance for steady-state operations while creating protected network structures for strategic AI initiatives, requiring explicit CEO-level sponsorship to prevent organizational antibodies from forcing innovations back into bureaucratic processes.
  4. Deploy strategic sensing infrastructure across five channels—competitive intelligence, regulatory horizon scanning, technology radar, customer signals, and ecosystem mapping—with explicit ownership and monthly weak-signal briefings to executives.
  5. Target 40-50% planned failure rates for AI initiative portfolios as a health indicator, celebrating productive learning rather than only successes, and using rapid pivot-or-persevere reviews to accelerate strategic learning velocity.

Introduction

The pace of AI adoption across Southeast Asia has exposed a structural weakness in many of the region's largest enterprises: the inability to move quickly without losing control. Enterprise agility, in the context of AI transformation, is not simply about rapid iteration. It is the organizational capacity to sense market shifts, reconfigure resources, and execute strategic pivots while maintaining operational stability. For companies across ASEAN, this capability has become urgent. Digital-native competitors are emerging from Singapore's fintech corridors. Regulators such as Bank Negara Malaysia and the Monetary Authority of Singapore are issuing transformation mandates. And competitive dynamics are compressing AI adoption timelines that might otherwise unfold over years into quarters.

The challenge is specific: how to build organizational responsiveness without sacrificing the governance, risk management, and cultural cohesion that define successful Southeast Asian companies. Western agility models prioritize speed above all else, but that orientation maps poorly onto the relationship-driven, consensus-oriented decision cultures prevalent across ASEAN markets. What follows is a framework designed for regional realities, one that creates mechanisms for faster strategic execution while respecting the institutional context in which these enterprises operate.

The Enterprise Agility Maturity Matrix

Enterprise agility for AI initiatives operates across four dimensions, each requiring different organizational capabilities:

| Dimension | Level 1: Reactive | Level 2: Responsive | Level 3: Adaptive | Level 4: Anticipatory |
|---|---|---|---|---|
| Strategic Sensing | Quarterly market reviews | Monthly competitor monitoring | Real-time signal detection | Predictive market modeling |
| Resource Mobility | Annual budget cycles | Quarterly reallocation | Dynamic resource pools | Autonomous team funding |
| Decision Velocity | 90+ day approval cycles | 30-60 day decisions | Weekly strategic reviews | Delegated authority frameworks |
| Learning Integration | Post-project reviews | Quarterly retrospectives | Continuous feedback loops | Embedded experimentation |

Most Southeast Asian enterprises currently operate between Level 1 and Level 2. Singapore's DBS Bank represents one of the few regional organizations consistently operating at Level 3 and Level 4. DBS restructured around autonomous squads with pre-approved innovation budgets and weekly decision forums capable of redirecting resources based on emerging opportunities.

The gap between Level 2 and Level 3 is where the stakes are highest. At Level 2, an organization can respond to competitor moves. At Level 3, it can shape market dynamics. That transition is where AI initiatives shift from incremental improvements to genuine strategic differentiation.

Strategic Architecture: The Three-Horizon Agility Model

Effective enterprise agility demands simultaneous execution across three time horizons. Each requires a distinct governance model and its own success metrics.

Horizon 1: Operational Agility (0-12 months)

The objective of the first horizon is to maintain and optimize existing operations while embedding AI capabilities into core processes. Three mechanisms make this possible.

First, sprint-based governance replaces monthly steering committees with bi-weekly sprint reviews for AI initiatives under $500K. Second, pre-approved experimentation budgets allocate 10 to 15 percent of IT budgets to team-level experiments. The typical Southeast Asian enterprise currently allocates less than 3 percent. Third, rapid procurement pathways establish fast-track vendor approval for AI tools under specified thresholds, calibrated to regulatory comfort: $50K in Malaysia, $100K in Singapore.

A critical adaptation for Malaysian and Indonesian enterprises: regulatory audits remain frequent in these markets, and documentation requirements should be stronger than Western models suggest. The answer is not to eliminate governance but to create templated approval paths that accelerate decisions without sacrificing compliance.

Horizon 2: Strategic Agility (12-36 months)

The second horizon is where organizations build new capabilities and business models that leverage AI for competitive advantage. This requires a fundamentally different posture toward risk.

A venture portfolio approach treats AI initiatives as investment portfolios with planned failure rates. Between 40 and 50 percent of pilots should be designed to fail fast, generating strategic learning rather than sunk costs. Cross-functional strike teams with six-month mandates and executive air cover bypass standard processes to test high-potential opportunities. Quarterly "pivot or persevere" reviews, governed by pre-defined kill criteria and success thresholds, enforce the discipline to reallocate resources from underperforming bets.
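The pivot-or-persevere review described above reduces to a simple decision rule once kill criteria and success thresholds are pre-defined. The following sketch illustrates one way to encode it; the field names and threshold semantics are illustrative assumptions, not part of the framework itself.

```python
def quarterly_review(initiative: dict) -> str:
    """Pivot-or-persevere decision against pre-defined thresholds.

    Expects an initiative dict with a normalized KPI score ('actual')
    plus its pre-agreed 'kill_below' and 'scale_above' thresholds.
    All names here are illustrative.
    """
    if initiative["actual"] <= initiative["kill_below"]:
        return "kill"        # reallocate resources; capture the learning
    if initiative["actual"] >= initiative["scale_above"]:
        return "persevere"   # expand funding for the next quarter
    return "pivot"           # adjust approach; re-review next quarter
```

Because the thresholds are agreed before the pilot starts, the review becomes a mechanical check rather than a political negotiation, which is what keeps the cadence fast.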

For Thai and Indonesian conglomerates, existing venture arms provide a natural home for aggressive AI experiments. This structure creates organizational separation from core operations while maintaining strategic alignment, and it respects family ownership dynamics while enabling the kind of risk-taking that drives breakthrough outcomes.

Horizon 3: Transformational Agility (3-5 years)

The third horizon reshapes organizational identity and market position through AI-native business models. Scenario-based strategy develops multiple strategic pathways based on AI maturity trajectories, regulatory evolution, and competitive dynamics. Ecosystem orchestration builds partner networks that can be activated rapidly as strategies shift. And cultural rewiring implements incentive systems that reward strategic learning over plan adherence.

Vietnamese and Philippine enterprises should factor in talent mobility constraints when planning at this horizon. Technical talent concentration in metro areas limits rapid scaling, making partnerships and acquisitions more reliable paths to agility than purely organic development.

Decision Velocity Framework: Accelerating Strategic Choices

Decision speed is the primary bottleneck for enterprise agility in Southeast Asia. According to research by IMDA (Singapore's Infocomm Media Development Authority), regional enterprises take an average of 73 days to approve new AI initiatives, compared to 34 days for North American counterparts.

The Authority Matrix

Closing that gap begins with defining clear decision rights across four categories:

| Decision Type | Authority Level | Maximum Timeline | Required Approvals |
|---|---|---|---|
| Type A: Experimental (< $50K, no customer data) | Team Lead | 3 days | Department head notification |
| Type B: Operational ($50K-$500K, internal only) | Department Head | 10 days | Functional VP + Legal review |
| Type C: Strategic ($500K-$2M, customer-facing) | Executive Committee | 30 days | CFO + CRO + relevant regulators |
| Type D: Transformational (> $2M, core business impact) | Board + CEO | 60 days | Full governance review |

The critical finding: 65 percent of AI decisions in Southeast Asian enterprises are currently treated as Type C or D when they should be classified as Type A or B. Reclassifying decisions according to their actual risk profile can triple decision velocity without increasing organizational exposure.
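The authority matrix is, at bottom, a classification rule over budget and risk exposure. A minimal sketch, using the thresholds from the matrix above (the dataclass fields are illustrative assumptions about how an organization might capture the inputs):

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    budget_usd: float
    uses_customer_data: bool
    customer_facing: bool
    core_business_impact: bool

def classify(d: AIDecision) -> str:
    """Map a proposed AI initiative to an authority tier (A-D),
    checking the highest-exposure tier first."""
    if d.core_business_impact or d.budget_usd > 2_000_000:
        return "D"   # Board + CEO, 60-day maximum
    if d.customer_facing or d.budget_usd > 500_000:
        return "C"   # Executive committee, 30-day maximum
    if d.uses_customer_data or d.budget_usd > 50_000:
        return "B"   # Department head, 10-day maximum
    return "A"       # Team lead, 3-day maximum
```

Running every pending initiative through a rule like this is one way to surface the 65 percent that are currently over-escalated: any proposal classified A or B that is sitting in an executive-committee queue is a candidate for reclassification.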

The Pre-Commitment Protocol

Further friction can be removed by establishing pre-approved frameworks in three areas. Vendor pre-qualification maintains approved lists of AI vendors across categories, from cloud ML platforms to consulting partners to niche tool providers, with pre-negotiated terms. Use case templates create standardized business cases for common AI applications such as customer churn prediction, demand forecasting, and document processing, with pre-approved ROI assumptions. Risk thresholds define explicit boundaries for acceptable risk in experimentation, eliminating case-by-case risk assessments for routine decisions.

OCBC Bank in Singapore provides a compelling proof point. The bank established a "Fast Lane" approval process for AI tools meeting pre-defined security and compliance criteria, reducing vendor onboarding from 6 to 8 months down to 3 to 4 weeks.

Resource Reallocation Mechanisms

Agility is only as real as an organization's ability to redirect resources, including budget, talent, and executive attention, toward emerging opportunities. Most Southeast Asian enterprises lock 85 to 90 percent of resources into annual plans, leaving almost no capacity for strategic reallocation.

The Dynamic Resource Pool Model

The solution is a central resource pool managed at the executive level, reserving 20 to 30 percent of the AI and innovation budget for dynamic allocation. Resources are released quarterly based on emerging priorities and performance metrics. Teams submit lightweight business cases to compete for funding, creating a market-like mechanism for capital allocation.

For most Southeast Asian enterprises, reaching the 30 percent target requires a graduated approach. In Year 1, a 10 percent dynamic pool builds organizational confidence and establishes governance processes. Year 2 expands to 20 percent as teams demonstrate the capacity to absorb rapid funding. Year 3 reaches the 30 percent target with mature governance in place.
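The graduated ramp above is simple enough to express directly. A sketch of the schedule and the resulting quarterly release amounts (function names and the flat quarterly split are illustrative assumptions):

```python
def dynamic_pool_share(year: int) -> float:
    """Dynamic-pool share of the AI/innovation budget by program year,
    following the illustrative 10% -> 20% -> 30% ramp."""
    schedule = {1: 0.10, 2: 0.20}
    return schedule.get(year, 0.30)  # year 3 onward holds at 30%

def quarterly_release(annual_budget: float, year: int) -> float:
    """Funds released each quarter from the dynamic pool,
    assuming an even split across four quarters."""
    return annual_budget * dynamic_pool_share(year) / 4
```

On a $10M annual AI budget, for example, a Year 2 pool releases $500K per quarter for teams to compete over with lightweight business cases.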

Talent Mobility Framework

Resource agility is not only financial. Talent flexibility often matters more.

Rotation protocols establish 6-to-12-month "tour of duty" assignments where high performers join strategic AI initiatives. Explicit return guarantees, ensuring the same level or higher upon completion, reduce the political resistance that otherwise kills mobility programs. Project-based compensation premiums of 10 to 20 percent above base pay incentivize participation.

In markets with strong seniority cultures, including traditional Malaysian government-linked companies and family conglomerates shaped by East Asian corporate norms, framing matters. Rotations positioned as development opportunities blessed by senior leadership gain traction where lateral moves would face resistance.

Strategic Sensing Infrastructure

Anticipatory agility requires sophisticated sensing mechanisms that detect weak signals before they become obvious trends. Five channels, each with explicit ownership assigned to specific roles rather than defaulting to the strategy team, constitute the foundation.

The Five Sensing Channels

Competitive intelligence encompasses monitoring AI patent filings in relevant jurisdictions (Singapore, the United States, and China), tracking talent movements between competitors through LinkedIn and regional networks, and analyzing competitor earnings calls and investor presentations for strategic signals.

Regulatory horizon scanning demands active relationships with key regulators including MAS, Bank Negara, and OJK in Indonesia. Participation in industry working groups shaping AI governance provides both influence and early warning. Singapore's regulatory proposals typically run 18 to 24 months ahead of regional adoption, making that market a reliable leading indicator.

A technology radar conducts quarterly reviews of emerging AI capabilities from major cloud providers, maintains attendance at key regional events such as the Singapore FinTech Festival and Money20/20 Asia, and cultivates relationships with research institutions including NUS, NTU, and SUTD in Singapore and Universitas Indonesia.

Customer signal detection mines customer service interactions for emerging needs and pain points, monitors social media sentiment and discussion themes, and conducts quarterly "voice of customer" sessions focused specifically on digital and AI expectations.

Ecosystem mapping tracks startup funding and pivot patterns in relevant sectors, monitors partnership announcements and ecosystem moves by technology giants, and analyzes supplier and partner capability evolution.

Monthly "weak signals" briefings for the executive team, highlighting three to five emerging patterns, translate sensing into strategic action.

Agility Operating Model: Governance Without Bureaucracy

The central tension is clear: agility requires decisiveness, but Southeast Asian enterprises operate in high-stakes environments where governance failures carry severe consequences. Regulatory penalties and reputational damage in relationship-driven markets make the cost of missteps extraordinarily high.

The Dual Operating System

The resolution lies in maintaining two parallel governance structures that operate independently day to day but interact through defined interfaces.

System 1, the hierarchy, governs steady-state operations. It maintains traditional approval chains and risk controls, detailed documentation and audit trails, and quarterly planning cycles within annual budgets. This system manages 70 to 80 percent of organizational activity.

System 2, the network, governs strategic initiatives. It operates through cross-functional teams with delegated decision authority, lightweight governance oriented around outcomes, and rapid experimentation and iteration cycles. This system manages 20 to 30 percent of activity, including all major AI initiatives.

The two systems connect through quarterly portfolio reviews, shared risk frameworks, and common talent pools. The single most important success factor: System 2 requires explicit executive protection. Without CEO-level sponsorship, organizational antibodies will inevitably force network initiatives back into hierarchical processes.

Risk Management for Agile Enterprises

Agility does not mean recklessness. The key is separating risks that require centralized control from those that can be safely delegated.

The Risk Classification Matrix

Category 1, non-negotiable controls, must remain centralized. These include data privacy and security, regulatory compliance, financial controls and reporting, and reputational risks involving customers or partners.

Category 2, managed risks, can be delegated within guardrails. Technology choices within approved parameters, vendor selection from pre-qualified lists, resource allocation within approved budgets, and timeline and scope decisions for approved initiatives all fall here.

Category 3, experimental risks, can be fully delegated. Proof-of-concept technology selections, internal process experiments, learning initiatives without customer impact, and partnership explorations governed by NDAs require no centralized approval.

Most Southeast Asian enterprises over-control Category 2 and Category 3 risks, creating bottlenecks that slow the organization without materially reducing its exposure.

Measurement Framework: Tracking Agility Progress

Effective agility metrics focus on leading indicators rather than lagging outcomes. The following core KPIs provide a diagnostic baseline:

| Metric | Target (Level 3+) | Current SEA Average |
|---|---|---|
| Average decision cycle time | < 15 days | 73 days |
| Resource reallocation per quarter | 15-20% of portfolio | 5% |
| Time from idea to pilot | < 30 days | 120+ days |
| Percentage of failed initiatives | 40-50% | < 10% |
| Cross-functional team composition | 60%+ outside core function | 20% |
| Executive time on emerging opportunities | 30-40% | 10-15% |

One metric deserves particular attention: the percentage of failed initiatives. A failure rate below 10 percent does not signal operational excellence. It signals that the organization is not experimenting aggressively enough. Higher failure rates indicate healthier experimentation portfolios. Southeast Asian enterprises need to celebrate productive failures, not just successes.
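A diagnostic against these targets can be automated as a simple gap check. The sketch below flags any supplied KPI that misses its Level 3+ band; the metric keys and the decision to treat both under- and over-shooting the failure-rate band as misses are illustrative assumptions.

```python
def agility_gaps(metrics: dict) -> list[str]:
    """Return the names of supplied KPIs that miss the Level 3+ targets.

    Note that failure_rate is banded, not minimized: a portfolio failing
    under 40% of initiatives is flagged as under-experimenting.
    """
    targets = {
        "decision_cycle_days": lambda v: v < 15,
        "reallocation_rate":   lambda v: 0.15 <= v <= 0.20,
        "idea_to_pilot_days":  lambda v: v < 30,
        "failure_rate":        lambda v: 0.40 <= v <= 0.50,
    }
    return [name for name, on_target in targets.items()
            if name in metrics and not on_target(metrics[name])]
```

An organization at the regional averages (73-day cycles, sub-10% failure rate) would see both metrics flagged, with the low failure rate signaling risk-aversion rather than excellence.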

Strategic Learning Velocity

Beyond the core KPIs, organizations should track how quickly insights from experiments feed into strategic decisions. Four measures capture this: the elapsed time from pilot completion to strategic decision (expand, pivot, or kill), the number of strategic assumptions tested per quarter, the percentage of strategy reviews incorporating recent learning, and the speed at which best practices diffuse across the organization.

Cultural Enablers: Building the Agile Mindset

Frameworks fail without cultural support. Southeast Asian enterprises must address three specific cultural dynamics that shape how agility takes root.

Hierarchy and Empowerment Balance

Respect for authority remains strong across ASEAN markets. Rather than fighting this cultural reality, effective organizations work with it. Senior leaders explicitly delegate authority and protect team decisions from second-guessing. Sponsorship models allow executives to champion initiatives without controlling them. Town halls and internal communications repeatedly reinforce empowerment messages, building confidence that acting on delegated authority is not just permitted but expected.

Consensus and Speed Balance

Consensus-driven decision making ensures buy-in but slows execution. The resolution is to distinguish between "input," which is broadly gathered, and "approval," which is narrowly held. Time-boxing applies discipline: input is gathered for two weeks, then the designated leader decides. "Disagree and commit" protocols ensure that dissenting views are recorded and respected, but execution proceeds without delay.

Face-Saving and Experimentation Balance

Failure carries social costs in relationship-oriented cultures, and those costs can quietly kill experimentation programs. Three adjustments shift the dynamics. Framing experiments as "learning initiatives" rather than "pilots" removes the implied expectation of success. Celebrating "insights gained" replaces the binary of success and failure with a spectrum of value. And team-based accountability distributes responsibility, while senior leaders sharing their own failure stories normalizes the process of learning through experimentation.

Implementation Roadmap: 90-Day Sprints

Sprint 1 (Days 1-90): Foundation

The first two weeks focus on assessment: mapping current decision flows for AI initiatives, identifying bottlenecks and unnecessary governance layers, and surveying teams on agility barriers. Weeks three through six shift to design, defining the authority matrix and decision rights, establishing the dynamic resource pool at an initial 10 percent allocation, and assigning sensing channel ownership. Weeks seven through ten launch two to three initiatives using the new agile processes, testing fast-track approval pathways and documenting lessons learned. The final two weeks of the sprint refine frameworks based on pilot learnings and secure executive commitment for broader rollout.

Sprint 2 (Days 91-180): Scaling

The second sprint expands agile governance to 30 to 50 percent of the AI portfolio, increases the dynamic resource pool to 15 to 20 percent, and launches cross-functional strike teams for strategic initiatives. Bi-weekly sprint review cadences become standard operating procedure, and the organization begins tracking agility KPIs.

Sprint 3 (Days 181-270): Embedding

By the third sprint, agile frameworks extend beyond AI to other strategic initiatives. Talent rotation programs expand, strategic learning velocity metrics are implemented, and internal case studies document early wins. Cultural change programs begin in earnest.

Sprint 4 (Days 271-360): Optimization

The fourth sprint conducts a full annual agility assessment and benchmarks against regional leaders. The authority matrix is refined based on accumulated risk experience, the dynamic resource pool expands to 25 to 30 percent, and next-year enhancements are planned.

Regional Variations: Adapting to Market Context

Singapore

Singapore offers the highest agility potential in the region, driven by regulatory sophistication and talent density. Organizations here should focus on anticipatory capabilities at Level 4, leverage government programs such as AI Singapore and IMDA grants for ecosystem connections, and benchmark against global standards rather than regional peers.

Malaysia

Malaysian enterprises must balance agility with bumiputera policy requirements and the public accountability demands of government-linked companies. Governance documentation should be emphasized even as timelines accelerate. Multimedia Super Corridor status and associated tax incentives provide tools for attracting talent to agile teams, and partnerships with MDEC support capability building.

Indonesia

Indonesian enterprises face the challenge of navigating complex approval processes across archipelago operations. Establishing a Jakarta-based agile hub with national execution authority addresses geographic complexity. Longer timelines for regulatory engagement with OJK and Bank Indonesia must be built into plans. The conglomerate structures prevalent in the market can be leveraged to create protected innovation zones.

Thailand

Thai enterprises operate within established relationships and hierarchies that reward stability. Framing agility as evolution rather than revolution gains acceptance. Royal and government digitalization initiatives lend legitimacy to transformation efforts, and Special Economic Zones within the Eastern Economic Corridor provide dedicated experimentation space.

Vietnam

Vietnam's growing tech talent pool and startup ecosystem present significant opportunity. Enterprises must navigate state-owned enterprise approval processes, where pre-commitment protocols prove particularly valuable. Ho Chi Minh City's tech corridor provides access to talent and partnership opportunities, and organizations should plan for rapid scaling once regulatory approvals are secured.

Philippines

The Philippines benefits from strong English proficiency and familiarity with US business culture. Talent concentration in Metro Manila can be addressed through remote-first agile teams. The country's deep BPO industry connections offer ready-made implementation partnerships, while family conglomerate dynamics require protected innovation units that can operate with appropriate autonomy.

Conclusion: Agility as Competitive Necessity

Enterprise agility has shifted from a strategic aspiration to a survival requirement for Southeast Asian organizations pursuing AI transformation. The region's unique combination of rapid digitalization, evolving regulatory frameworks, and intensifying competition creates both urgency and complexity that cannot be addressed by Western playbooks alone.

This framework provides a structured path forward that respects Southeast Asian organizational realities while building the responsiveness required to compete with digital-native disruptors and global technology giants. The enterprises that master this balance will define the region's next decade of growth. Those that do not will find themselves perpetually reactive, executing yesterday's strategies in tomorrow's markets.

Common Questions

How long does it take to build enterprise agility capabilities?

Most organizations can establish foundational agility capabilities within 90-180 days, but reaching Level 3 maturity (adaptive) typically requires 18-24 months. The timeline varies significantly by market context—Singapore-based enterprises with existing digital capabilities can move faster (12-18 months to Level 3), while traditional conglomerates in Indonesia or Malaysia may require 24-36 months due to more complex approval hierarchies and cultural change requirements. The key is starting with focused pilots on 2-3 AI initiatives using agile processes, demonstrating success, then expanding rather than attempting organization-wide transformation immediately. Quick wins in the first 90 days—such as reducing decision cycles for small experiments from 60+ days to under 10 days—build momentum and executive confidence for broader changes.

How much of the AI budget should go into a dynamic resource pool?

Start with 10% of your AI and innovation budget in a centrally-managed dynamic pool during year one, then expand to 20-30% as your organization develops the capability to rapidly absorb and deploy resources. This is significantly higher than the current SEA enterprise average of 3-5%. The phased approach is critical because most organizations lack the decision-making processes, governance frameworks, and cultural norms to effectively deploy large dynamic pools initially. Singapore's DBS Bank operates at approximately 30% dynamic allocation, while most Malaysian and Indonesian enterprises should target 15-20% as a mature-state goal. The dynamic pool should be released quarterly based on emerging opportunities and performance metrics, with lightweight business case requirements (1-2 pages maximum) to maintain speed.

How can we accelerate AI decisions without losing governance control?

The solution is risk reclassification, not risk elimination. Implement a four-tier authority matrix that distinguishes between experimental risks (fully delegated), managed risks (delegated with guardrails), and non-negotiable controls (centralized). Analysis shows that 65% of AI decisions in SEA enterprises are currently escalated to executive committees when they should be handled at team or department level. Create pre-approved frameworks for common scenarios: vendor pre-qualification lists, standardized use case templates with pre-approved ROI assumptions, and explicit risk thresholds for experimentation. Singapore's OCBC reduced AI tool approval times from 6-8 months to 3-4 weeks using this approach while actually strengthening compliance through better-designed guardrails. The key insight is that faster decisions on low-risk initiatives free up governance capacity for genuinely strategic choices that require careful consideration.

What cultural barriers should Southeast Asian enterprises expect?

Three cultural dynamics create unique challenges: hierarchy and authority (respect for senior leadership can prevent delegation), consensus-driven decision making (broad buy-in slows execution), and face-saving norms (fear of public failure reduces experimentation). Rather than fighting these dynamics, successful frameworks work with them. Have senior leaders explicitly and publicly delegate authority and protect team decisions. Distinguish between gathering input (broadly) and granting approval (narrowly), using time-boxed consultation periods followed by designated leader decisions. Frame experiments as 'learning initiatives' rather than 'pilots' and celebrate insights gained rather than success/failure binaries. Vietnamese and Indonesian enterprises report that having CEOs share their own failure stories in town halls significantly accelerates cultural change. The goal isn't to eliminate these cultural characteristics but to create specific mechanisms and leadership behaviors that enable speed while respecting relationship-oriented values.

How do agility targets differ across Southeast Asian markets?

Singapore enterprises should target Level 3-4 maturity (adaptive to anticipatory) and benchmark against global standards, leveraging sophisticated regulatory frameworks, dense talent pools, and government support programs like AI Singapore. Decision cycles under 15 days and 25-30% dynamic resource pools are achievable. In contrast, Malaysia, Indonesia, and Thailand enterprises face additional complexity: navigating GLCs and bumiputera policies in Malaysia, archipelago operations and OJK engagement in Indonesia, or established hierarchy and relationship protocols in Thailand. These markets should target Level 2-3 maturity initially, with 15-20% dynamic pools and 30-day decision cycles representing strong performance. The framework principles remain consistent, but implementation timelines extend 6-12 months, governance documentation requirements increase, and cultural change programs require more intensive executive sponsorship. Vietnam and Philippines can move faster due to younger organizations and stronger startup ecosystems, targeting timelines between Singapore and traditional SEA markets.

Which metrics best indicate enterprise agility?

Counterintuitively, failure rate is a critical indicator—healthy agility portfolios show 40-50% of initiatives being killed or pivoted, while most SEA enterprises report under 10% failures, indicating insufficient experimentation and risk-aversion. Other key metrics include: decision cycle time under 15 days (vs. 73-day SEA average), 15-20% of resources reallocated quarterly (vs. 5% typical), and time from idea to pilot under 30 days (vs. 120+ days common). Also track strategic learning velocity: how quickly insights from experiments influence strategy decisions, measured through executive time allocation to emerging opportunities (target 30-40% vs. 10-15% typical). The measurement framework should include both speed metrics (decision velocity, resource mobility) and learning metrics (experiment completion, insight integration). Avoid vanity metrics like number of AI projects or total budget allocated—these don't indicate agility. Focus instead on cycle times, reallocation rates, and learning integration speed.

Should we adopt Western agility frameworks such as SAFe or the Spotify model?

Adopt the principles but adapt the implementation significantly. Western frameworks like SAFe, Spotify models, or traditional agile methodologies prioritize speed and autonomy in ways that conflict with Southeast Asian organizational realities: stronger hierarchy, consensus-oriented decisions, relationship-driven business culture, and different regulatory environments. This framework incorporates Western agility principles (rapid iteration, delegated authority, portfolio approaches) but adapts execution through mechanisms like dual operating systems (maintaining hierarchy for steady-state while creating network structures for innovation), explicit senior leader sponsorship protocols, team-based rather than individual accountability, and stronger governance documentation. Malaysian and Indonesian enterprises particularly need these adaptations given regulatory audit frequencies. Singapore enterprises can adopt Western models more directly given cultural similarity to Western business practices and regulatory sophistication. The key is customizing frameworks to your specific context rather than importing models wholesale or rejecting agility concepts entirely because Western implementations don't fit.

References

  1. National Institute of Standards and Technology (2023). AI Risk Management Framework (AI RMF 1.0).
  2. International Organization for Standardization (2023). ISO/IEC 42001:2023 — Artificial Intelligence Management System.
  3. PDPC and IMDA Singapore (2020). Model AI Governance Framework (Second Edition).
  4. Enterprise Singapore (2024). Enterprise Development Grant (EDG).
  5. OECD (2019). OECD Principles on Artificial Intelligence.
  6. ASEAN Secretariat (2024). ASEAN Guide on AI Governance and Ethics.
  7. SkillsFuture Singapore (2024). Training Subsidies for Employers — SkillsFuture for Business.
