AI Readiness & Strategy Guide

Why AI Pilots Fail to Scale: The 95% Problem MIT Identified

February 8, 2026 · 14 min read · Pertama Partners
Updated February 21, 2026
For: IT Manager, CEO/Founder, CTO/CIO, CISO, CHRO, CFO, Data Science/ML, Head of Operations

MIT research shows 95% of GenAI pilots fail to reach production. This analysis reveals the pilot-to-production gap and proven strategies for scaling AI...

Part 5 of 17

AI Project Failure Analysis

Why 80% of AI projects fail and how to avoid becoming a statistic. In-depth analysis of failure patterns, case studies, and proven prevention strategies.

Practitioner

Key Takeaways

  1. MIT research shows 95% of GenAI pilots fail to reach production due to infrastructure limits, cost explosions, governance gaps, integration complexity, organizational resistance, and model performance degradation
  2. Design production-ready pilots that test against actual production constraints from day one—use real data governance, integrate with production systems, simulate scale, and calculate true production costs
  3. Infrastructure that works for 50 pilot users often collapses under 5,000 production users—conduct stress testing at 10x-100x pilot scale to reveal capacity limits before production commitment
  4. Pilot costs of $10K monthly can explode to $500K at production scale, destroying business cases—calculate full production costs (compute, APIs, storage, support labor) during pilots, not after
  5. Establish explicit go/no-go decision criteria upfront and make disciplined scaling decisions based on production blockers revealed—don't let sunk cost fallacy drive failed deployments

The $14 Million Pilot That Never Launched

A regional Indonesian bank built a GenAI customer service chatbot that performed brilliantly in pilot. The bot answered 92% of queries accurately, delighted the 50 test users, and cost $8,000 monthly. Executives approved production rollout.

Then reality struck.

Scaling to 50,000 customer interactions daily revealed problems invisible in the pilot: API costs exploded to $180,000 monthly (destroying ROI), response latency increased from 2 seconds to 47 seconds (unacceptable for customers), the model hallucinated when encountering dialects not in curated training data, integration with the legacy CRM system failed under production load, and data governance frameworks couldn't handle real customer PII.

Total investment before cancellation: $14 million. Production usage: zero.

This story repeats constantly across Southeast Asia. MIT's 2025 GenAI Adoption Study found that 95% of generative AI pilots fail to reach production. Not because the technology doesn't work in pilots—it does. But because pilots succeed in simplified, controlled environments that bear little resemblance to production reality.

The pilot-to-production gap isn't a technical problem with an engineering solution. It's a strategic problem with organizational roots. Organizations approach pilots as proof-of-concept exercises that validate "does AI work?" when they should approach them as production-readiness tests that reveal "can we scale this?"

This article examines why AI pilots fail at scale, how Southeast Asian organizations encounter these challenges, and proven strategies to bridge the pilot-to-production gap.

The Fundamental Pilot Trap

Pilots are designed to succeed. That's the problem.

Organizations structure pilots to minimize risk and maximize learning:

  • Curated data: Clean, labeled, representative data selected specifically for testing
  • Limited scope: Small user base, narrow use cases, controlled scenarios
  • Dedicated resources: Assigned infrastructure, focused team attention
  • Forgiving stakeholders: Pilot participants understand this is experimental
  • Patient timeline: Generous time for troubleshooting and refinement

Production environments are designed to deliver value at scale:

  • Real-world data: Messy, incomplete, inconsistent data that reflects reality
  • Full scope: Thousands of users, diverse use cases, edge cases you didn't anticipate
  • Shared resources: Competing infrastructure demands, operational constraints
  • Demanding stakeholders: Users expect reliable, fast, accurate results—no excuses
  • Urgent timelines: Business value depends on rapid, consistent delivery

The gap between these environments kills 95% of pilots.

What succeeds in the cocoon of a pilot often collapses when exposed to production reality. Organizations discover problems too late—after investing millions, after committing to timelines, after announcing initiatives to stakeholders.

The solution isn't better pilots. It's different pilots designed from day one to surface production blockers early.

Why Pilots Fail at Scale: The Six Critical Failure Modes

1. Infrastructure Limitations (67% of Failures)

Pilots use infrastructure that works fine for 50 users but collapses under 5,000.

Compute capacity constraints: A pilot chatbot serving 50 concurrent users needs minimal compute. Production serving 5,000 concurrent users during peak hours requires 100x capacity. Organizations underestimate this scaling factor because pilot infrastructure "worked fine."

Latency degradation: Acceptable 3-second pilot response times become unacceptable 30-second production delays when shared infrastructure, network constraints, and database queries hit scale.

Database bottlenecks: Pilot data fits in memory. Production data requires distributed databases, caching strategies, and query optimization that didn't matter at pilot scale.

Model serving challenges: Serving a GenAI model to 50 pilot users works on CPU. Production might require GPU clusters, model quantization, or edge deployment—infrastructure complexity invisible in pilots.

Southeast Asian context: Infrastructure challenges compound in regions with variable internet quality, data sovereignty requirements (Indonesia mandates local data storage), and limited cloud region availability. A Singapore-based pilot won't reveal Jakarta production infrastructure realities.

Real example: A Malaysian e-commerce platform's product recommendation AI worked perfectly in pilot (500 users, <1s latency). Production rollout to 50,000 users revealed database query patterns that caused 15-second delays during peak shopping hours. The pilot infrastructure simply couldn't predict production query volumes.

Prevention strategy: Conduct stress testing during pilots. Simulate production load (10x, 50x, 100x pilot volume). Test infrastructure under concurrent users, peak demand, and degraded network conditions. Reveal infrastructure limits before production commitment.
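The stress-testing idea above can be sketched in a few lines of Python. This is illustrative only: `call_service` is a hypothetical stand-in for one request to the pilot system (you would swap in a real request against a staging endpoint), and the multipliers mirror the 10x/50x/100x guidance.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(query: str) -> float:
    """Hypothetical stand-in for one AI service call; returns latency in seconds.
    Replace the sleep with a real request against a staging endpoint."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated model inference
    return time.perf_counter() - start

def stress_test(pilot_users: int, multipliers=(10, 50, 100)) -> dict:
    """Replay the pilot workload at 10x/50x/100x volume and record p95 latency."""
    report = {}
    for m in multipliers:
        n = pilot_users * m
        with ThreadPoolExecutor(max_workers=min(n, 200)) as pool:
            latencies = list(pool.map(call_service, [f"query-{i}" for i in range(n)]))
        report[m] = statistics.quantiles(latencies, n=20)[18]  # 95th percentile
    return report

report = stress_test(pilot_users=50)
for m, p95 in report.items():
    print(f"{m:>3}x pilot load: p95 latency {p95 * 1000:.1f} ms")
```

In a real pilot, the calls would go through the same infrastructure, network path, and database the production deployment will use; latency that only degrades at 50x is exactly the signal this test exists to surface.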

2. Cost Explosions (58% of Failures)

Pilots operate at costs that seem reasonable but become prohibitive at scale.

Token/API cost multiplication: A GenAI pilot costing $10,000 monthly becomes $500,000 monthly at 50x scale. This destroys business cases built on pilot economics.

Compute cost underestimation: Pilot compute ($2,000/month) scales non-linearly. Production might require redundancy, auto-scaling, and disaster recovery infrastructure ($45,000/month) never needed in pilots.

Data storage and transfer: Pilot data storage costs ($500/month) become production data lakes with compliance requirements ($12,000/month).

Support and maintenance: Pilots run on volunteer effort. Production requires dedicated support teams, on-call rotation, and ongoing model maintenance—labor costs invisible in pilots.

Vendor lock-in exposure: Pilots use vendor-provided credits and pilot pricing. Production faces commercial rates, minimum commitments, and escalation clauses that make pilots economically misleading.

Southeast Asian context: USD-denominated AI vendor pricing (OpenAI, Anthropic, Cohere) creates currency exposure for Southeast Asian organizations. Pilot economics in stable currency conditions become production disasters when currency fluctuates 20%.

Real example: A Singapore healthcare startup's diagnostic AI pilot cost $8K monthly using OpenAI API. Production projections showed $240K monthly at full patient volume—economically unviable. The pilot successfully proved the AI worked but failed to reveal the business model didn't work.

Prevention strategy: Calculate full production costs during pilot phase. Include compute, API fees, data storage, bandwidth, support labor, compliance costs, and contingency (25%). Build business case on production economics, not pilot subsidies.
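The cost arithmetic above can be captured in a small helper. A sketch with made-up numbers: linear scaling of variable costs is a floor (API and compute spend often scale worse than linearly), and `fixed_monthly_costs` stands for the production-only items listed in this section (redundancy, support labor, compliance).

```python
def project_production_cost(
    pilot_monthly_cost: float,
    pilot_users: int,
    production_users: int,
    fixed_monthly_costs: float = 0.0,  # support labor, compliance, redundancy
    contingency: float = 0.25,         # the 25% buffer recommended above
) -> float:
    """Scale variable pilot costs by user volume, add fixed production-only
    costs, then apply a contingency buffer. Linear scaling is optimistic."""
    variable = pilot_monthly_cost * (production_users / pilot_users)
    return (variable + fixed_monthly_costs) * (1 + contingency)

# The $10K/month pilot from this section, scaled 50x,
# with a hypothetical $60K of production-only fixed costs:
monthly = project_production_cost(10_000, 1_000, 50_000, fixed_monthly_costs=60_000)
print(f"Projected production cost: ${monthly:,.0f}/month")  # → $700,000/month
```

Running this during the pilot, not after commitment, is the point: if the projected figure breaks the business case, that is a no-go signal while the investment is still small.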

3. Data Governance Failures (54% of Failures)

Pilot data governance works for test data but fails for production sensitivity.

Privacy compliance gaps: Pilots use sanitized or synthetic data. Production handles real customer PII subject to the PDPA (Singapore), PDPA 2010 (Malaysia), PDP Law (Indonesia), and PDPA (Thailand)—regulations with penalties for violations.

Data quality at scale: Pilot data is curated and cleaned. Production data includes duplicates, nulls, inconsistent formats, and edge cases that break models trained on clean pilot data.

Access control complexity: Pilots grant broad access to small teams. Production requires role-based access control (RBAC), audit logging, and separation of duties across hundreds of employees.

Compliance evidence: Pilots don't need audit trails. Production in regulated industries (banking, healthcare, insurance) requires provenance tracking, model versioning, and decision auditability.

Cross-border data challenges: Pilots process data in single jurisdiction. Production across ASEAN markets must navigate data localization requirements, cross-border transfer restrictions, and varying consent frameworks.

Southeast Asian context: Regional privacy law fragmentation creates governance complexity invisible in single-market pilots. An Indonesia-based pilot won't reveal Singapore's stringent PDPA compliance requirements or Thailand's localization mandates.

Real example: A Philippine insurance company's claims AI pilot used historical data with PII redacted. Production deployment failed audit because real-time claims processing required accessing unredacted data, exposing PII to model logging—a violation of privacy commitments the pilot never tested.

Prevention strategy: Use production data governance frameworks during pilots. Test with real PII (under controlled conditions), implement production access controls, and validate compliance requirements early. Ensure pilots surface governance gaps when there's time to fix them.

4. Integration Complexity (49% of Failures)

Pilots integrate with test environments. Production integrates with 20-year-old legacy systems never designed for AI.

API compatibility issues: Pilot systems expose clean APIs. Production legacy systems use SOAP, mainframe protocols, or batch file transfers that require extensive middleware.

Data format mismatches: Pilots standardize on JSON or parquet. Production legacy data comes in XML, fixed-width files, Excel spreadsheets, or proprietary formats requiring complex ETL.

Transaction handling: Pilots process data asynchronously with eventual consistency. Production requires ACID transactions, rollback capabilities, and guaranteed delivery—integration patterns pilots don't test.

Performance dependencies: Pilot integrations tolerate occasional slowdowns. Production integrations become bottlenecks when AI latency combines with legacy system response times.

Change management friction: Pilots integrate with systems controlled by the pilot team. Production integrations require change requests, testing cycles, and approvals from legacy system owners protecting uptime SLAs.

Southeast Asian context: Regional enterprises often run highly customized legacy systems built by local integrators. Generic integration patterns that work in Western markets fail against uniquely configured regional ERP, CRM, and core banking platforms.

Real example: A Thai manufacturing company's predictive maintenance AI pilot integrated beautifully with a modern SCADA system. Production deployment across factories required integration with 15 different machine controller protocols, some from vendors no longer in business. Integration complexity killed the pilot.

Prevention strategy: Integrate with actual production systems during the pilot. Don't test against mock APIs—test against real legacy systems. Identify integration blockers early, when solutions (middleware, data transformation, process changes) are still feasible.

5. Organizational Resistance at Scale (46% of Failures)

Pilot users are willing volunteers. Production users are reluctant conscripts.

Adoption resistance: Pilots recruit enthusiastic early adopters. Production forces AI on skeptical employees who didn't ask for it and don't want it.

Change management gaps: Pilots provide intensive hand-holding. Production expects thousands of users to self-onboard with minimal support.

Workflow disruption: Pilots accommodate AI limitations by adjusting workflows. Production users expect AI to fit existing workflows—and reject AI that requires workflow changes.

Trust deficits: Pilot users forgive occasional errors. Production users lose trust after a single mistake and revert to manual processes.

Political opposition: Pilots operate under executive protection. Production deployments face middle management resistance, union concerns, and departmental turf battles.

Southeast Asian context: Hierarchical organizational cultures create specific resistance patterns. C-suite pilots succeed because employees comply with executive sponsors. Production deployments without executive cover face passive resistance that doesn't surface in pilots.

Real example: A Vietnamese bank's loan processing AI pilot succeeded with 20 volunteer credit officers who loved the tool. Production rollout to 200 credit officers revealed intense resistance—officers saw AI as threatening their expertise and autonomy. Usage rates never exceeded 15%.

Prevention strategy: Pilot with representative user populations, not just enthusiasts. Include skeptics. Test change management strategies, training programs, and support systems during pilots. Measure adoption rates and user sentiment as key pilot success metrics.

6. Model Performance Degradation (43% of Failures)

Pilot model performance rarely translates to production.

Data drift: Models trained on historical pilot data degrade when production data distribution shifts (new products, seasonal patterns, market changes).

Edge case explosion: Pilots test common scenarios. Production encounters rare edge cases (unusual customer requests, system failures, input errors) that break model assumptions.

Adversarial inputs: Pilot users provide genuine inputs. Production users discover ways to game the system, whether intentionally (fraud) or accidentally (unusual data entry).

Feedback loop issues: Pilots operate in static mode. Production creates feedback loops (AI decisions influence future data) that can reinforce biases or create instability.

Model staleness: Pilots freeze models during testing. Production requires continuous model updates—operational capabilities pilots don't exercise.

Southeast Asian context: Regional linguistic diversity creates unique model degradation. A GenAI pilot tested in formal Bahasa Indonesia degrades when production users input Jakartan slang, regional dialects, or code-switching between languages.

Real example: A Singapore fintech's credit scoring AI achieved 89% accuracy in pilot using clean historical data. Production accuracy dropped to 62% within three months as data distribution shifted (economic conditions changed, customer demographics evolved). The pilot never tested model monitoring or retraining procedures.

Prevention strategy: Build model monitoring and retraining capabilities during pilots. Stress-test models with adversarial inputs, edge cases, and simulated data drift. Establish production model governance (who can update models, approval processes, rollback procedures) before scaling.
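One common way to operationalize the drift monitoring described above (a standard industry technique, not something the article prescribes) is the Population Stability Index over model scores. A stdlib-only sketch with invented score data:

```python
import math
from collections import Counter

def psi(baseline, current, bins=10):
    """Population Stability Index between the score distribution seen in the
    pilot and the one seen in production."""
    lo, hi = min(baseline + current), max(baseline + current)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range
    def proportions(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        # Tiny smoothing term keeps log() defined for empty buckets.
        return [(counts.get(b, 0) + 1e-6) / (len(xs) + bins * 1e-6) for b in range(bins)]
    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

pilot_scores = [i / 10 for i in range(100)]        # scores observed during the pilot
prod_scores = [i / 10 + 3.0 for i in range(100)]   # production scores, shifted upward
drift = psi(pilot_scores, prod_scores)
print(f"PSI: {drift:.2f}")
```

A common rule of thumb: PSI below 0.1 means the distribution is stable, 0.1–0.25 warrants investigation, and above 0.25 signals major drift that should trigger the retraining procedures the pilot is supposed to have rehearsed.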

The Production-Ready Pilot Framework

Traditional pilots ask: "Does this AI work?"

Production-ready pilots ask: "Can we scale this AI?"

The framework shifts pilot design to surface production blockers early:

Phase 1: Production-Constraint Design (Before Pilot Starts)

Define production requirements upfront:

  • Scale: How many users, transactions, data volume?
  • Performance: Required latency, throughput, availability?
  • Cost: Maximum acceptable cost per transaction/user/query?
  • Governance: What data sensitivity, compliance, audit requirements?
  • Integration: Which systems must AI connect to?
  • Support: What support model, escalation paths, issue resolution SLAs?

Design pilot to test production constraints:

  • Use production data governance frameworks (even on test data)
  • Integrate with actual production systems (not mocks)
  • Simulate production scale (load testing, stress testing)
  • Calculate costs at production volume (not pilot subsidies)
  • Test production support processes (not informal troubleshooting)

Southeast Asian consideration: Include regulatory, linguistic, and infrastructure diversity in production requirements. Singapore pilot requirements should account for deployment constraints in Thailand, Indonesia, and Malaysia.

Phase 2: Pilot Execution (With Production Realism)

Test at production scale (even if simulated):

  • Load testing: 10x, 50x, 100x pilot user volume
  • Stress testing: Degraded infrastructure, network failures, concurrent demand
  • Endurance testing: Run for weeks to reveal memory leaks, resource exhaustion

Use production data (with appropriate controls):

  • Real customer data under production governance
  • Actual PII (with consent, controls, audit)
  • Production data quality issues (duplicates, nulls, formatting)

Engage production stakeholders:

  • Include skeptical users, not just enthusiasts
  • Involve legacy system owners in integration testing
  • Test production support processes (ticketing, escalation, fixes)

Measure production readiness metrics:

  • Cost per transaction at scale
  • Latency under production load
  • Integration complexity revealed
  • User adoption rates (not just satisfaction)
  • Model performance on real data (not curated data)
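The readiness metrics above can come straight out of pilot logs. A minimal sketch with invented numbers (the function and its inputs are hypothetical, not the article's tooling):

```python
import statistics

def readiness_report(latencies_s, monthly_cost, transactions, active_users, total_users):
    """Summarize the production-readiness metrics listed above from raw pilot data."""
    return {
        "p95_latency_s": statistics.quantiles(latencies_s, n=20)[18],
        "cost_per_transaction": monthly_cost / transactions,
        "adoption_rate": active_users / total_users,
    }

report = readiness_report(
    latencies_s=[1.2, 1.5, 1.8, 2.0, 2.4, 2.9, 3.1, 1.1, 1.6, 2.2,
                 1.9, 2.5, 1.4, 1.7, 2.1, 2.8, 1.3, 2.0, 1.8, 2.6],
    monthly_cost=73_000,     # projected at production volume, not pilot spend
    transactions=500_000,
    active_users=45,         # users still using the tool weekly
    total_users=50,          # all pilot participants, skeptics included
)
print(report)
```

Each value then maps directly onto a go/no-go threshold defined before the pilot started: cost per transaction against the business-case ceiling, p95 latency against the performance requirement, and adoption rate against the minimum viable uptake.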

Phase 3: Go/No-Go Decision (Before Production Investment)

Based on pilot results, make explicit go/no-go decision:

Go criteria (all must be met):

  • Production costs support viable business case
  • Infrastructure can handle projected scale
  • Integration complexity is manageable
  • Data governance framework works at production sensitivity
  • User adoption meets minimum thresholds
  • Model performance is acceptable on production data

No-go criteria (any triggers halt):

  • Production costs destroy business case
  • Infrastructure limits prevent scaling
  • Integration complexity is insurmountable
  • Governance gaps create unacceptable risk
  • User resistance prevents adoption
  • Model performance degrades unacceptably

Conditional-go criteria (require fixes before scaling):

  • Addressable cost optimization opportunities
  • Infrastructure upgrades needed but feasible
  • Integration work required but scoped
  • Governance gaps identified with remediation plan
  • Change management investment required
  • Model improvement roadmap defined

The discipline of explicit go/no-go prevents the default "we've invested too much to stop now" bias that drives failed scaling attempts.
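The three-way decision logic above is simple enough to encode, which makes the criteria explicit and auditable rather than negotiable after the fact. A sketch, with hypothetical criterion names:

```python
def scaling_decision(criteria: dict, remediation_planned: set) -> str:
    """Apply the go / conditional-go / no-go logic above.
    `criteria` maps each go-criterion to whether the pilot met it;
    `remediation_planned` names failed criteria that have a scoped, feasible fix."""
    failed = {name for name, met in criteria.items() if not met}
    if not failed:
        return "go"                # every criterion met
    if failed <= remediation_planned:
        return "conditional-go"    # all failures have a remediation plan
    return "no-go"                 # at least one unaddressed blocker

pilot = {
    "costs_support_business_case": True,
    "infrastructure_handles_scale": False,  # fixable with scoped upgrades
    "integration_manageable": True,
    "governance_works": True,
    "adoption_meets_threshold": True,
    "model_performance_acceptable": True,
}
print(scaling_decision(pilot, remediation_planned={"infrastructure_handles_scale"}))
# → conditional-go
```

The value is less in the code than in the discipline it forces: every criterion is named upfront, and a "conditional-go" requires an explicit remediation plan rather than optimism.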

Case Study: Production-Ready Pilot Success

Company: Sea Group (Singapore-headquartered, operating across Southeast Asia)

Challenge: Scale GenAI customer support across Shopee e-commerce platform in 7 markets

Traditional pilot approach (what they avoided):

  • Test with 100 Singapore users on curated data
  • Run on vendor-provided credits
  • Integrate with mock systems
  • Declare success based on user satisfaction
  • Scale to production and encounter all six failure modes

Production-ready pilot approach (what they did):

Week 1-2: Production requirements definition

  • Defined target: 500K daily customer interactions across markets
  • Required latency: <3 seconds 95th percentile
  • Cost ceiling: $0.15 per interaction (business case threshold)
  • Compliance: PDPA (Singapore and Thailand), PDPA 2010 (Malaysia), PDP Law (Indonesia), data localization requirements
  • Integration: Zendesk, Salesforce, order management, payment systems
  • Support: 24/7 multi-lingual, <1 hour critical issue resolution

Week 3-8: Production-constrained pilot

  • Scale testing: Simulated 10K concurrent users (20x production peak)
  • Production data: Used real support tickets (PII redacted but structure intact)
  • Cost validation: Ran at pilot scale but calculated costs at production volume ($73K/month—within business case)
  • Multi-market testing: Tested in Singapore, Indonesia, Thailand simultaneously to reveal localization challenges early
  • Integration: Connected to actual production Zendesk and Salesforce instances (read-only mode)
  • User diversity: Included skeptical support agents, not just volunteers

Week 9-12: Production readiness fixes

  • Cost optimization: Model distillation reduced costs from $73K to $52K monthly
  • Latency improvement: Caching and edge deployment reduced 95th percentile to 2.1s
  • Governance: Implemented data access controls meeting all regional requirements
  • Integration: Built robust error handling for production system intermittency
  • Change management: Developed agent training program based on pilot feedback

Production results (6 months post-launch):

  • Scale: 480K daily interactions (96% of target)
  • Latency: 1.8s 95th percentile (exceeded requirement)
  • Cost: $48K monthly ($0.10 per interaction—below ceiling)
  • Compliance: Zero violations across 7 markets
  • Adoption: 89% of support agents actively using (vs. 30% industry average)
  • ROI: 3.7x in first year (vs. projected 2.1x)

Key lessons:

  1. Production-ready pilot revealed cost optimization needs before production commitment
  2. Multi-market testing surfaced localization challenges pilot could address
  3. Integration with real systems prevented production deployment surprises
  4. User diversity testing revealed change management requirements early
  5. Stress testing at 20x scale built confidence in production infrastructure

Sea's approach cost 40% more than traditional pilot ($280K vs. $200K) but prevented multi-million dollar production failures.

Practical Recommendations

For Organizations Planning AI Pilots

  1. Define production requirements before pilot starts: Scale, performance, cost, governance, integration
  2. Design pilots to test production constraints: Use production data governance, integrate with real systems, simulate scale
  3. Budget pilots at 30-40% above traditional pilot costs: Production-ready pilots cost more upfront but prevent far bigger failures
  4. Establish explicit go/no-go criteria upfront: Don't let sunk costs drive scaling decisions
  5. Include skeptical users in pilots: Enthusiast-only pilots hide adoption challenges
  6. Calculate costs at production scale during pilots: Pilot economics mislead
  7. Test integration with production systems: Mock APIs hide integration complexity

For Organizations with Struggling Pilots

  1. Audit pilots against six failure modes: Infrastructure, cost, governance, integration, adoption, model performance
  2. Calculate true production costs: Include all infrastructure, API fees, support labor, compliance costs
  3. Stress test at production scale: Simulate 10x, 50x, 100x load before committing
  4. Engage production system owners: Integration complexity emerges late
  5. Test with skeptical users: Measure actual adoption, not just satisfaction
  6. Build production monitoring: Model performance, cost, latency, adoption dashboards
  7. Make explicit go/no-go decision: Sunk cost fallacy drives bad scaling decisions

For Executives Approving AI Scaling

Ask these questions before approving production rollout:

  1. "What are the costs at full production scale?" (not pilot costs)
  2. "Did we stress test at 50-100x pilot load?" (not just pilot volume)
  3. "What's the adoption rate among skeptical users?" (not just enthusiasts)
  4. "Did we integrate with actual production systems?" (not mocks)
  5. "What governance gaps did the pilot reveal?" (and how did we address them?)
  6. "What production blockers remain unresolved?" (honest assessment)
  7. "What's our rollback plan if production fails?" (exit strategy)

If the team can't answer these confidently, delay production scaling until they can.

Conclusion: Pilots Aren't Practice—They're Production Preparation

The 95% pilot failure rate isn't inevitable. It's a consequence of treating pilots as proof-of-concept exercises instead of production preparation.

Traditional pilots ask: "Can we build this?" The answer is usually yes—modern AI capabilities are remarkable. But "can we build it?" is the wrong question.

Production-ready pilots ask: "Can we scale this?" This forces confronting infrastructure limits, cost explosions, governance gaps, integration complexity, organizational resistance, and model degradation while there's still time to address them.

The difference between a $200K successful pilot and a $14M failed production deployment is asking the hard questions early.

Southeast Asian organizations face distinct pilot-to-production challenges: regulatory fragmentation across markets, linguistic diversity, infrastructure variability, currency exposure, and legacy system complexity. These challenges compound the six universal failure modes.

But these challenges aren't insurmountable. Organizations that design production-ready pilots—testing at scale, using real data, integrating with production systems, calculating true costs, engaging diverse users, and making disciplined go/no-go decisions—achieve production success rates exceeding 70%.

The choice isn't between no pilots and traditional pilots. It's between misleading pilots that hide production blockers and production-ready pilots that surface them early.

Design pilots that reveal problems when you can still fix them. Your production success rate will reflect it.

Common Questions

How do production-ready pilots differ from traditional pilots?

Production-ready pilots test against actual production constraints from day one: they use production data governance frameworks (even on test data), integrate with real production systems (not mocks), simulate production scale through load testing (10x-100x pilot volume), calculate costs at production volume (not pilot subsidies), and include skeptical users (not just enthusiasts). Traditional pilots optimize for learning in controlled environments; production-ready pilots optimize for revealing production blockers early.

How much more do production-ready pilots cost?

Production-ready pilots typically cost 30-40% more than traditional pilots ($280K vs $200K in the Sea Group example). This extra investment buys stress testing, production system integration, multi-market validation, and comprehensive cost modeling. While more expensive upfront, production-ready pilots prevent multi-million dollar production failures. The ROI of spending $80K more on pilots versus losing $14M in a failed production deployment is clear.

What go/no-go criteria should govern the scaling decision?

Go criteria (all must be met): production costs support a viable business case, infrastructure handles projected scale, integration complexity is manageable, the governance framework works at production sensitivity, user adoption meets minimums, and model performance is acceptable on production data. No-go criteria (any triggers halt): production costs destroy the business case, infrastructure limits prevent scaling, integration is insurmountable, governance gaps create unacceptable risk, user resistance prevents adoption, or model performance degrades unacceptably. Make this decision explicit—don't let sunk costs drive scaling.

Why do GenAI pilot costs explode at production scale?

GenAI pilots are uniquely vulnerable to cost explosions because token-based pricing multiplies linearly with usage while business value often doesn't. A pilot costing $10K monthly (1,000 users × 100 queries × $0.10/query) becomes $500K monthly at 50,000 users—destroying ROI. Plus: GenAI requires expensive compute (GPUs), API fees compound at scale, data storage grows with conversation history, and model fine-tuning costs weren't in pilot budgets. Always calculate production costs during pilots, not after commitment.

How do Southeast Asian conditions change the pilot-to-production challenge?

Regional challenges compound the universal failure modes: regulatory fragmentation (Singapore's PDPA, Malaysia's PDPA 2010, and Indonesia's PDP Law vary by market), linguistic diversity (formal vs. colloquial language, code-switching, dialects), infrastructure variability (Singapore connectivity vs. rural Indonesia), currency exposure (USD AI pricing in volatile forex markets), data localization mandates (Indonesia requires local storage), and highly customized legacy systems. A Singapore pilot won't reveal Indonesia production constraints—multi-market pilots are essential for regional scaling.

If a pilot reveals production blockers, has it failed?

Not necessarily—production blockers revealed in pilots are opportunities to fix problems before they become failures. The pilot's purpose is surfacing blockers early when solutions are still feasible (cost optimization, infrastructure upgrades, governance fixes, integration middleware, change management investment). Make explicit go/no-go decisions: proceed if blockers are addressable, pause if fixes need more work, kill if blockers are insurmountable. The discipline prevents sunk cost fallacy from driving bad scaling decisions.

Which failure mode do organizations most underestimate?

Organizational resistance at scale. Pilots recruit willing volunteers who forgive AI mistakes and adapt workflows. Production forces AI on thousands of skeptical employees who didn't ask for it, won't change workflows for it, and abandon it after the first mistake. Organizations discover adoption rates of 15-30% versus pilot rates of 80-90%. Prevention: include skeptical users in pilots, measure adoption (not just satisfaction), test change management strategies, and budget for comprehensive training and support at production scale.

