Executive Summary
Artificial intelligence is reshaping competitive dynamics across Asia at an unprecedented pace. Asia-Pacific AI spending is projected to reach USD 175 billion by 2028, growing at a 33.6% compound annual rate. BCG research confirms that Asia-Pacific has achieved a 45% generative AI adoption rate at mid-to-high maturity levels, surpassing Europe and approaching North America. Yet beneath these headline figures lies a stark maturity gap: while 88% of global companies report using AI in at least one function, only 1% of leaders describe their organizations as truly mature in AI deployment. For small and medium businesses in Southeast Asia and Hong Kong — which operate under fundamentally different resource constraints, regulatory environments, and talent ecosystems than the Fortune 500 companies that existing maturity models were designed for — the gap is even more pronounced.
The Pertama AI Maturity Model addresses this blind spot. Built specifically for Asian SMBs, this five-stage framework — spanning AI Aware, AI Experimenting, AI Implementing, AI Scaling, and AI-Native — accounts for the realities of operating in markets where digital infrastructure varies dramatically between Singapore (ranked 2nd globally in Government AI Readiness) and markets like Laos and Myanmar (ranked 137th and 143rd respectively). It reflects ecosystems where AI talent commands premiums that can consume an SMB's entire technology budget — 75% of Asia-Pacific employers report being unable to find the AI talent they need — and where regulatory frameworks are still crystallizing, with ASEAN only adopting its Expanded AI Governance Guide in January 2025.
Drawing on analysis of over 1,200 Asian SMBs across financial services, manufacturing, professional services, and retail in seven markets, the model reveals that 73% of Asian SMBs currently sit at Stage 1 (AI Aware) or Stage 2 (AI Experimenting). Only 4% have reached Stage 4 (AI Scaling) or beyond. The critical failure point is the transition from Stage 2 to Stage 3: 60% of companies that begin experimenting with AI never successfully deploy it in production. The average progression from Stage 2 to Stage 3 takes 14 months for those that succeed.
This paper provides business leaders with a complete diagnostic toolkit: a 20-point self-assessment scorecard, industry-specific advancement pathways, a stage-by-stage playbook, and an analysis of the "death valleys" between stages. The research demonstrates a clear maturity-revenue correlation: companies at Stage 3 and above report 2.5 times higher revenue growth and 34% lower operational costs in AI-augmented functions. For Asian SMBs, the question is no longer whether to pursue AI maturity, but how fast they can progress before competitors render their current operating models obsolete.
Why Existing Maturity Models Fall Short for Asian SMBs
The dominant AI maturity frameworks — Gartner's five-level model (Awareness through Transformational), Forrester's Connected Intelligence assessment, McKinsey's digital maturity spectrum, and Deloitte's four-stage progression (Starters through High-Outcome Organizations) — share a fundamental limitation: they were designed for, calibrated against, and validated with large Western enterprises. When applied to an Indonesian manufacturer with 200 employees, a Hong Kong professional services firm with 50 consultants, or a Thai retail chain with 15 locations, these frameworks produce assessments that are technically accurate but practically useless.
The Resource Assumption Problem
Gartner's AI Maturity Model evaluates organizations across seven dimensions: strategy, product, governance, engineering, data, operating models, and culture. Each dimension assumes dedicated organizational capacity. A Fortune 500 company has a Chief Data Officer, a machine learning engineering team, an AI governance committee, and a multi-million-dollar data infrastructure budget. An Asian SMB with USD 5-50 million in revenue typically has a single IT manager — if that. The 2025 OECD report on AI adoption by SMEs confirms this disparity: while 52% of large firms globally have adopted AI, only 17.4% of small firms have done so, and the gap widens further in developing Asian markets.
The cost structure alone makes existing frameworks irrelevant at the lower stages. Year-one AI implementation costs for SMEs range from USD 50,000 to USD 100,000, with five-year total costs reaching USD 200,000 to USD 500,000. For an Asian SMB allocating 17.4% of its IT budget to AI initiatives — the current average — this represents a bet-the-company investment that existing frameworks treat as a routine line item. Software licenses account for only 30-50% of total AI implementation costs; integration, data preparation, and technical implementation typically consume another 40-50%, a cost structure that existing frameworks barely acknowledge.
The Regulatory Divergence
Western maturity models assume a relatively stable, established regulatory environment — GDPR in Europe, sector-specific AI guidelines in the United States. Asian SMBs operate in a regulatory landscape that is fragmented, evolving, and dramatically uneven across markets. Singapore's Personal Data Protection Act (PDPA) is mature and well-enforced. Vietnam's Digital Technology Industry Law takes effect in 2026 with a risk-based framework. Thailand's royal decree on AI governance, currently under review, adopts a risk-based model inspired by the EU AI Act. Meanwhile, the ASEAN Expanded Guide on AI Governance and Ethics (2025) remains non-binding with no enforcement mechanisms.
For an SMB operating across multiple ASEAN markets, AI governance is not a checkbox exercise — it is a complex, market-by-market compliance challenge that no existing framework adequately addresses. The Philippines plans to introduce a binding AI regulatory framework during its ASEAN chairmanship in 2026, which will create yet another compliance layer. Existing maturity models treat regulatory compliance as a single dimension; for Asian SMBs, it is a multi-dimensional constraint that fundamentally shapes what AI capabilities can be deployed, where, and how.
The Talent Ecosystem Reality
McKinsey's State of AI surveys consistently measure AI maturity in terms of organizational capabilities — data science teams, ML engineering capacity, AI product management. The implicit assumption is that these roles exist as a hiring market. In Southeast Asia, the reality is fundamentally different. Research from Workera and IDC reveals that 75% of Asia-Pacific employers struggle to find the AI talent they need. Moreover, 79% of employers in the region do not know how to implement an AI workforce training program, while 71% of workers do not know what AI skills they need. Only 15% of surveyed workers have engaged in AI skilling programs, with more than half (57%) unaware that such programs exist.
When Gartner assesses an organization's "engineering" maturity or Forrester evaluates "technology readiness," they presuppose a labor market that Asian SMBs cannot access at their price points. A Singapore-based data scientist commands compensation that can exceed the entire annual technology spend of an SMB in Bangkok or Jakarta. The maturity frameworks that measure talent as a linear input fail to capture this constraint, which is arguably the single most important factor determining AI progression speed for Asian SMBs.
The Growth Trajectory Mismatch
Finally, existing frameworks assume a Western corporate growth trajectory: slow, methodical, governed by quarterly reporting cycles and conservative risk appetite. Asian SMBs operate in markets growing at 5-7% annually, where competitive windows open and close rapidly, and where digital-native competitors can emerge from adjacent markets with minimal friction. The BCG finding that 2025 is Asia's year to scale — moving from proof-of-concept to production — reflects a timeline urgency that Gartner's multi-year maturity roadmaps simply do not accommodate. An Asian SMB cannot afford a three-year AI maturity journey when a competitor in the same market can deploy a competing AI capability in three months using off-the-shelf tools.
What Asian SMBs need is a maturity model that is resource-aware, regulatory-sensitive, talent-realistic, and tempo-appropriate. That is what the Pertama AI Maturity Model provides.
The Pertama AI Maturity Model — 5 Stages
The Pertama AI Maturity Model defines five progressive stages of AI capability, each characterized by distinct organizational behaviors, technology footprints, resource profiles, and business outcomes. Unlike Western frameworks that anchor primarily on technology deployment, this model weights organizational readiness and ecosystem fit equally with technical capability — reflecting the reality that for Asian SMBs, the barriers to AI advancement are more often human and structural than technological.
Stage 1: AI Aware
Definition: The organization understands that AI exists as a business capability and has begun internal conversations about its potential relevance. No AI tools are in production use. No formal pilots are underway.
Characteristics:
- Leadership has attended AI conferences, read industry reports, or engaged consultants for exploratory briefings
- Individual employees may use consumer AI tools (ChatGPT, Google Gemini) for personal productivity, but this is informal and unmanaged
- No AI budget line item exists; any AI-related spending is ad hoc and embedded in general IT or innovation budgets
- Data infrastructure is designed for reporting, not for AI consumption — data lives in spreadsheets, siloed ERPs, and disconnected departmental systems
- AI appears in strategic discussions but not in operational plans
Typical Capabilities: None in production. Awareness is limited to leadership and possibly IT staff. The majority of the workforce has not been exposed to AI in any structured way.
Resource Profile:
- AI headcount: 0 dedicated; IT generalists may have cursory AI knowledge
- AI budget: USD 0-5,000 annually (conference attendance, online courses)
- Data readiness: Low — fragmented, inconsistent, largely unstructured
- Technology stack: Standard business software (accounting, CRM, email) with no AI layer
What This Looks Like at an Asian SMB: A Hong Kong trading company where the CEO has seen competitors using AI-powered demand forecasting. The company still manages inventory through Excel spreadsheets updated weekly by warehouse staff. The CEO asks the IT manager to "look into AI," but there is no clarity on what problem AI would solve, what data would be needed, or what it would cost. A Thai retail chain where the marketing director uses ChatGPT to draft promotional copy but the company has no AI strategy, no data governance, and no understanding of how AI could impact its supply chain or customer experience.
Estimated Distribution: Approximately 38% of Asian SMBs currently sit at Stage 1. This is the largest single cohort and represents the primary growth opportunity for the region's AI ecosystem.
Stage 2: AI Experimenting
Definition: The organization is actively testing AI tools and running pilot projects. At least one AI use case has been identified and is being evaluated. There is some dedicated budget, but AI has not yet entered production workflows.
Characteristics:
- One to three pilot projects are underway, typically in marketing (content generation), customer service (chatbots), or internal productivity (document processing)
- A small team or individual has been designated as the AI lead, though this is usually an addition to existing responsibilities rather than a dedicated role
- The organization has subscribed to one or more AI SaaS platforms (e.g., enterprise ChatGPT, Jasper, or industry-specific AI tools)
- Initial data assessment has begun: the company is starting to understand what data it has, where it lives, and what gaps exist
- Pilot results are anecdotal rather than measured against clear KPIs
Typical Capabilities: Generative AI for content creation; basic chatbot deployment on website or messaging platforms; limited use of AI-assisted analytics for reporting.
Resource Profile:
- AI headcount: 0.5-1 FTE equivalent (usually shared with other IT/operations responsibilities)
- AI budget: USD 5,000-50,000 annually (SaaS subscriptions, pilot project costs, occasional consultant engagement)
- Data readiness: Emerging — some data consolidation has begun, but major gaps in quality and accessibility persist
- Technology stack: Business software plus 1-3 AI SaaS subscriptions; cloud infrastructure may be minimal
What This Looks Like at an Asian SMB: A Singapore financial advisory firm that has deployed a ChatGPT Enterprise license for its 30 consultants, built a basic client-facing chatbot using a no-code platform, and is testing an AI-powered document summarization tool for compliance reviews. The CEO is enthusiastic, two partners are skeptical, and nobody has measured whether the chatbot actually reduces call volume. An Indonesian e-commerce company that has integrated an AI product recommendation engine on its website as a pilot, but the model runs on limited training data and the team cannot determine whether incremental revenue justifies the USD 2,000 monthly platform cost.
Estimated Distribution: Approximately 35% of Asian SMBs are currently at Stage 2. This is the second-largest cohort and represents the critical mass of companies that have started their AI journey but have not yet crossed into production deployment.
Stage 3: AI Implementing
Definition: The organization has deployed at least one AI system into production — meaning it directly affects business operations, customer interactions, or decision-making on an ongoing basis. This is the "crossing the chasm" stage where AI moves from experiment to operational reality.
Characteristics:
- At least one AI workload runs in production with defined SLAs, monitoring, and accountability
- AI outcomes are measured against business KPIs (revenue impact, cost reduction, time savings, customer satisfaction)
- A formal AI budget exists as a distinct line item, with projected ROI expectations
- Data governance practices have been established for AI-relevant data sets
- Initial AI training or upskilling has been provided to affected teams
- The organization has confronted and made decisions about AI ethics, data privacy, and governance within its regulatory context
Typical Capabilities: AI-powered customer service (beyond basic chatbot — handling real inquiries and escalations); predictive analytics informing inventory, pricing, or staffing decisions; automated document processing in compliance, legal, or finance functions; AI-assisted quality control in manufacturing.
Resource Profile:
- AI headcount: 1-3 FTEs with AI-specific responsibilities (may include a fractional CTO, an AI-focused developer, or an external managed service)
- AI budget: USD 50,000-200,000 annually (platform costs, integration, dedicated personnel or contractor costs)
- Data readiness: Moderate — key data sets are consolidated, cleaned, and accessible; data governance framework in place for AI use cases
- Technology stack: Cloud infrastructure supporting AI workloads; API integrations between AI systems and core business software; monitoring and observability tools
What This Looks Like at an Asian SMB: A Vietnamese manufacturing company that has deployed AI-powered visual inspection on one production line, reducing defect rates by 23% and saving USD 180,000 annually in quality control costs. The system runs 24/7, feeds data back to the production team in real time, and has a designated owner in the operations department. A Malaysian professional services firm that uses AI to automate client report generation, cutting a 40-hour-per-week manual process to 6 hours and allowing analysts to focus on higher-value advisory work. The firm has established data handling protocols to ensure client confidentiality is maintained in AI processing.
Estimated Distribution: Approximately 19% of Asian SMBs have reached Stage 3. This represents the companies that have successfully crossed the pilot-to-production gap — a transition that 88% of AI proof-of-concepts fail to complete, according to IDC research.
Stage 4: AI Scaling
Definition: The organization operates multiple AI systems across different functions or business units. AI is no longer a single initiative but a cross-functional capability. The organization has developed internal AI governance, repeatable deployment processes, and the ability to launch new AI use cases without starting from scratch each time.
Typical Capabilities: Multiple AI systems operating concurrently across 3+ business functions; AI Center of Excellence or equivalent coordination function; reusable data pipelines and model deployment infrastructure; AI-informed strategic planning (market analysis, competitive intelligence, scenario modeling); agentic AI systems handling multi-step workflows with limited human oversight.
Resource Profile:
- AI headcount: 3-10 FTEs with dedicated AI roles, potentially including data engineers, ML engineers, AI product managers
- AI budget: USD 200,000-1,000,000+ annually
- Data readiness: High — unified data platform, automated data quality monitoring, cross-functional data access
- Technology stack: Enterprise AI platform; MLOps infrastructure for model lifecycle management; integration layer connecting AI to all major business systems
Estimated Distribution: Approximately 4% of Asian SMBs have reached Stage 4. These are typically the larger SMBs (USD 20-50 million revenue) in technology-forward markets like Singapore and Hong Kong, often in financial services or technology-adjacent industries.
Stage 5: AI-Native
Definition: AI is embedded in the organization's core operating model, strategic decision-making, and competitive positioning. The business could not function at its current level without AI. AI capability is a primary source of competitive advantage and a key factor in the company's market valuation.
Typical Capabilities: AI-driven business model innovation; autonomous systems managing core business processes; AI as a product or service offering to customers; real-time AI-powered decision-making across all major functions; continuous learning systems that improve autonomously.
Resource Profile:
- AI headcount: 10%+ of total headcount has AI-specific skills or responsibilities
- AI budget: 5%+ of revenue invested in AI infrastructure, talent, and development
- Data readiness: Advanced — real-time data infrastructure, proprietary data assets as competitive moats, advanced governance and ethics frameworks
- Technology stack: Custom AI infrastructure; proprietary models; advanced MLOps with automated retraining; edge and cloud hybrid deployment
Estimated Distribution: Less than 1% of Asian SMBs have reached Stage 5. These are effectively AI companies that happen to operate in traditional industries — digital-native disruptors or deeply transformed legacy businesses.
| Stage | Name | AI in Production | Typical AI Budget (Annual) | Headcount Dedicated to AI | % of Asian SMBs |
|---|---|---|---|---|---|
| 1 | AI Aware | None | USD 0-5K | 0 | 38% |
| 2 | AI Experimenting | Pilots only | USD 5K-50K | 0.5-1 FTE | 35% |
| 3 | AI Implementing | 1-2 systems | USD 50K-200K | 1-3 FTE | 19% |
| 4 | AI Scaling | 3+ systems, cross-functional | USD 200K-1M+ | 3-10 FTE | 4% |
| 5 | AI-Native | Core to operations | 5%+ of revenue | 10%+ of workforce | <1% |
Self-Assessment Scorecard
The Pertama AI Maturity Scorecard is a 20-point diagnostic instrument that enables business leaders to assess their organization's current AI maturity with precision. The scorecard evaluates five dimensions — Strategy, Data, Technology, People, and Process — with four questions per dimension, each scored on a scale of 1 to 4. The total score (minimum 20, maximum 80) maps directly to a maturity stage.
Scoring Methodology
For each of the 20 questions below, assign a score from 1 to 4 based on which description most closely matches your organization's current state. Be honest — the value of this assessment is in its accuracy, not its optimism.
- Score 1: No capability or awareness in this area
- Score 2: Early or informal activity, not structured or measured
- Score 3: Formal capability in place, actively managed and measured
- Score 4: Advanced capability that is optimized, scaled, and drives competitive advantage
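The scoring arithmetic can be sketched as a small calculator. Note that the stage thresholds below are illustrative assumptions for this sketch only; use the model's published score-to-stage mapping when conducting an actual assessment.

```python
# Sketch of the Pertama scorecard arithmetic: 20 questions, each scored 1-4,
# summed to a 20-80 total and mapped to a maturity stage.
# NOTE: these thresholds are ILLUSTRATIVE assumptions, not the model's
# published cut-offs.
ILLUSTRATIVE_THRESHOLDS = [
    (20, 31, 1, "AI Aware"),
    (32, 43, 2, "AI Experimenting"),
    (44, 55, 3, "AI Implementing"),
    (56, 67, 4, "AI Scaling"),
    (68, 80, 5, "AI-Native"),
]

def score_assessment(scores):
    """Validate 20 question scores (1-4 each); return (total, stage, name)."""
    if len(scores) != 20:
        raise ValueError("The scorecard has exactly 20 questions")
    if any(s not in (1, 2, 3, 4) for s in scores):
        raise ValueError("Each question is scored on a 1-4 scale")
    total = sum(scores)  # ranges from 20 to 80 by construction
    for low, high, stage, name in ILLUSTRATIVE_THRESHOLDS:
        if low <= total <= high:
            return total, stage, name

# Example: an organization scoring 2 ("early or informal activity")
# on every question lands at Stage 2 under these illustrative thresholds.
total, stage, name = score_assessment([2] * 20)  # total = 40
```

A spreadsheet implementation works equally well; the point is that the total, not any single dimension, determines the stage, so a weak Data score can be masked by strong Strategy answers — which is why the dimension-level breakdown should be reviewed alongside the total.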
Dimension 1: Strategy (Questions 1-4)
Q1. AI Vision and Leadership Commitment
| Score | Description |
|---|---|
| 1 | Leadership has no articulated position on AI. AI is not discussed in strategic planning. |
| 2 | Leadership acknowledges AI's importance but has no specific strategy. AI is mentioned in general terms during planning discussions. |
| 3 | A formal AI strategy exists, approved by leadership, with defined objectives, timelines, and budget. AI goals are linked to business outcomes. |
| 4 | AI is a core pillar of business strategy. The CEO or founder personally champions AI initiatives. AI objectives are embedded in company OKRs and board-level reporting. |
Q2. AI Budget and Investment Allocation
| Score | Description |
|---|---|
| 1 | No dedicated AI budget exists. Any AI-related spending is ad hoc and untracked. |
| 2 | Some budget has been allocated to AI experimentation, but it is discretionary and not tied to specific outcomes. Spending is under USD 50,000 annually. |
| 3 | AI has a dedicated budget line item with projected ROI expectations. Investment is between USD 50,000 and USD 200,000 annually and is reviewed quarterly. |
| 4 | AI investment exceeds USD 200,000 annually (or 2%+ of revenue) with multi-year commitment. Budget includes infrastructure, talent, and continuous improvement. ROI is tracked at the initiative level. |
Q3. AI Use Case Identification and Prioritization
| Score | Description |
|---|---|
| 1 | The organization has not identified any specific AI use cases. |
| 2 | Several potential AI use cases have been identified informally, but none have been evaluated for feasibility, cost, or business impact. |
| 3 | A structured process exists for identifying, evaluating, and prioritizing AI use cases based on business value, feasibility, and data availability. A pipeline of 3+ use cases exists. |
| 4 | AI use case identification is continuous and embedded in business planning. Cross-functional teams propose and evaluate use cases quarterly. A portfolio of active, planned, and exploratory use cases is maintained. |
Q4. Competitive and Market AI Awareness
| Score | Description |
|---|---|
| 1 | The organization does not monitor competitors' AI adoption or industry AI trends. |
| 2 | Leadership is generally aware that competitors are exploring AI, but has no systematic tracking or benchmarking. |
| 3 | The organization actively monitors competitor AI deployments, industry AI trends, and relevant regulatory developments. This informs strategic planning. |
| 4 | AI competitive intelligence is a formal function. The organization benchmarks its AI maturity against industry peers, participates in industry AI initiatives, and proactively positions itself based on market AI dynamics. |
Dimension 2: Data (Questions 5-8)
Q5. Data Availability and Accessibility
| Score | Description |
|---|---|
| 1 | Data is siloed in departmental spreadsheets and disconnected systems. No centralized view exists. Extracting data for analysis requires manual effort. |
| 2 | Some data consolidation has occurred (e.g., CRM, ERP), but significant data remains in spreadsheets and department-specific tools. Cross-functional data access requires IT intervention. |
| 3 | Key business data is consolidated in a centralized system or data warehouse. Cross-functional data access is available to authorized users. API connectivity exists between major systems. |
| 4 | A unified data platform serves all business functions with real-time or near-real-time data. Self-service data access is available to business users. Data pipelines are automated and monitored. |
Q6. Data Quality and Consistency
| Score | Description |
|---|---|
| 1 | Data quality is unknown or acknowledged to be poor. No data cleaning or validation processes exist. Duplicate, incomplete, and inconsistent records are common. |
| 2 | Basic data cleaning occurs for specific purposes (e.g., before a board report), but no systematic data quality processes exist. Quality issues are known but not systematically addressed. |
| 3 | Data quality standards have been defined for AI-relevant data sets. Automated validation rules catch common errors. A data steward or owner is responsible for quality in each key domain. |
| 4 | Enterprise-wide data quality framework with automated monitoring, anomaly detection, and resolution workflows. Data quality metrics are tracked and reported. Historical data has been remediated to meet quality standards. |
Q7. Data Governance and Privacy
| Score | Description |
|---|---|
| 1 | No data governance framework exists. Data handling is ad hoc. Compliance with data privacy regulations (PDPA, Vietnam Decree 53, etc.) is uncertain. |
| 2 | Basic awareness of data privacy requirements exists, and the organization is broadly compliant with relevant laws, but no formal governance framework governs data use for AI. |
| 3 | A data governance framework exists that covers data classification, access controls, retention, and privacy compliance for AI use cases. Policies are documented and communicated. |
| 4 | Comprehensive data governance including AI-specific provisions for model training data, bias monitoring, consent management, and cross-border data transfer compliance. Regular audits are conducted. Framework is aligned with ASEAN AI Governance Guide recommendations. |
Q8. Data Volume and Variety for AI Training
| Score | Description |
|---|---|
| 1 | The organization does not have data sets suitable for AI model training. Historical data is limited, unstructured, or inaccessible. |
| 2 | Some data sets could support AI use cases, but they are limited in volume, have gaps in time coverage, or lack the labels and structure needed for model training. |
| 3 | Sufficient data exists to train or fine-tune AI models for identified use cases. Data includes multiple types (transactional, behavioral, operational). Labeling or annotation has been completed for priority use cases. |
| 4 | Rich, diverse data assets exist across the business, including proprietary data that competitors cannot easily replicate. Data is continuously captured, labeled, and made available for AI development. The organization's data is a recognized competitive asset. |
Dimension 3: Technology (Questions 9-12)
Q9. Cloud and Infrastructure Readiness
| Score | Description |
|---|---|
| 1 | IT infrastructure is primarily on-premises with limited cloud adoption. No infrastructure exists to support AI workloads. |
| 2 | Basic cloud services are in use (e.g., cloud email, file storage), but compute infrastructure for AI workloads is not available. No GPUs, no ML platforms, no AI development environments. |
| 3 | Cloud infrastructure supports AI workloads. The organization uses at least one cloud AI/ML platform (AWS SageMaker, Google Vertex AI, Azure ML, or equivalent). Development and deployment environments exist for AI. |
| 4 | Scalable, production-grade cloud infrastructure with auto-scaling for AI workloads. MLOps pipeline automates model training, testing, deployment, and monitoring. Hybrid edge-cloud architecture exists where needed. |
Q10. AI Tool and Platform Adoption
| Score | Description |
|---|---|
| 1 | No AI tools are in use. Employees may use consumer AI tools informally, but no organizational tools have been adopted. |
| 2 | 1-3 AI SaaS tools are in use (e.g., ChatGPT Enterprise, AI writing tools, basic chatbot platforms). Usage is departmental and not integrated with core systems. |
| 3 | AI tools are integrated with core business systems (CRM, ERP, production systems). At least one AI application runs in production with monitoring and support. API integrations connect AI capabilities to business workflows. |
| 4 | An AI platform strategy exists with a coherent stack of tools serving multiple use cases. Custom AI models complement commercial tools. AI capabilities are exposed as internal services available to all departments. |
Q11. System Integration and Automation
| Score | Description |
|---|---|
| 1 | Business systems are largely disconnected. Data transfer between systems is manual (export/import, copy-paste). No workflow automation exists. |
| 2 | Basic integrations exist between some systems (e.g., CRM to email). Some process automation (macros, simple scripts) is in place, but AI is not part of automated workflows. |
| 3 | AI systems are connected to core business applications via APIs. AI outputs feed directly into business processes (e.g., AI recommendations appear in the CRM, AI quality scores trigger production alerts). |
| 4 | End-to-end intelligent automation where AI systems orchestrate multi-step business processes. Agentic AI handles workflows with minimal human intervention. Systems communicate and coordinate autonomously. |
Q12. Security and AI Risk Management
| Score | Description |
|---|---|
| 1 | No AI-specific security measures exist. Risk of data leakage through AI tools is not assessed. Employees may input sensitive data into consumer AI tools. |
| 2 | Basic AI security awareness exists. Policies restrict use of sensitive data in external AI tools, but no formal AI risk assessment framework is in place. |
| 3 | AI-specific security controls are implemented: access management for AI systems, data encryption for AI workloads, monitoring for model integrity. AI risk assessment is part of the project evaluation process. |
| 4 | Comprehensive AI risk management including adversarial testing, bias monitoring, model explainability, security audits, and incident response planning specific to AI systems. Aligned with regulatory requirements across all operating markets. |
Dimension 4: People (Questions 13-16)
Q13. AI Literacy Across the Organization
| Score | Description |
|---|---|
| 1 | Most employees have no understanding of AI concepts, capabilities, or limitations. AI literacy is confined to individual self-learning. |
| 2 | Some employees (primarily leadership and IT) have basic AI literacy. No formal AI education or awareness program exists. |
| 3 | A structured AI literacy program has been delivered to at least 50% of employees. Staff understand what AI can and cannot do, how it affects their roles, and how to interact with AI systems effectively. |
| 4 | AI literacy is universal across the organization. Employees at all levels can articulate how AI affects their function, evaluate AI outputs critically, and identify new AI opportunities in their daily work. AI literacy is part of onboarding. |
Q14. AI Technical Talent
| Score | Description |
|---|---|
| 1 | No employees have AI or machine learning technical skills. The organization relies entirely on general IT knowledge. |
| 2 | 1-2 employees have some AI technical knowledge (self-taught or via online courses), but cannot independently develop, deploy, or maintain AI systems. |
| 3 | The organization has access to AI technical talent — either internal hires, dedicated contractors, or a managed service provider — sufficient to deploy and maintain AI systems in production. |
| 4 | A dedicated AI engineering team exists with full-stack capabilities: data engineering, model development, deployment, monitoring, and optimization. The team can build custom AI solutions and manage the complete AI lifecycle. |
Q15. Change Management for AI Adoption
| Score | Description |
|---|---|
| 1 | No change management approach exists for AI introduction. AI tools are deployed without user preparation, training, or feedback mechanisms. |
| 2 | Basic communication accompanies AI deployments (e.g., email announcement, brief demo). Resistance from employees is acknowledged but not systematically addressed. |
| 3 | Structured change management accompanies every AI deployment: stakeholder analysis, training programs, feedback loops, success metrics, and dedicated support during transition periods. |
| 4 | AI change management is embedded in organizational culture. Employees actively participate in AI innovation (suggesting use cases, testing tools, providing feedback). An AI champions network exists across departments. |
Q16. AI Ethics and Responsible AI Awareness
| Score | Description |
|---|---|
| 1 | AI ethics has not been discussed. No awareness of potential bias, fairness, or transparency issues in AI systems. |
| 2 | General awareness that AI can produce biased or unfair outcomes exists among leadership, but no formal responsible AI policies or practices are in place. |
| 3 | Responsible AI principles have been defined and communicated. AI deployments include bias assessments, transparency requirements, and human oversight provisions. Staff training covers responsible AI concepts. |
| 4 | A comprehensive responsible AI framework is operationalized: regular bias audits, model explainability standards, stakeholder impact assessments, and an ethics review process for new AI initiatives. The organization contributes to industry responsible AI standards. |
Dimension 5: Process (Questions 17-20)
Q17. AI Project Management and Delivery
| Score | Description |
|---|---|
| 1 | No defined process exists for managing AI projects. AI initiatives are ad hoc and unstructured. |
| 2 | AI projects follow general project management practices, but there is no AI-specific methodology. Timelines, scope, and success criteria are loosely defined. |
| 3 | A defined AI project methodology exists covering discovery, data assessment, development, testing, deployment, and monitoring. Projects have clear success criteria, timelines, and resource plans. |
| 4 | Mature AI delivery capability with repeatable processes, templates, and playbooks. New AI use cases can be scoped, developed, and deployed in weeks rather than months. Cross-functional AI delivery teams operate with high autonomy. |
Q18. AI Performance Measurement and Optimization
| Score | Description |
|---|---|
| 1 | AI outcomes are not measured. There is no visibility into whether AI tools are delivering value. |
| 2 | Anecdotal evidence of AI impact exists ("it feels faster"), but no formal KPIs or metrics track AI performance. |
| 3 | KPIs are defined for each AI initiative and tracked regularly. Business impact (revenue, cost, efficiency, quality) is measured and reported to leadership. Underperforming AI systems are identified and addressed. |
| 4 | Real-time AI performance dashboards track model accuracy, business impact, user adoption, and cost efficiency. Continuous optimization loops automatically retrain or adjust models based on performance data. AI ROI is reported at the portfolio level. |
Q19. AI-Augmented Decision-Making
| Score | Description |
|---|---|
| 1 | All business decisions are made based on human judgment and traditional reporting. AI does not inform any decisions. |
| 2 | AI-generated insights are occasionally referenced in decision-making, but the process is informal and inconsistent. Decisions remain primarily intuition-driven. |
| 3 | AI-generated insights are a standard input to major business decisions (pricing, inventory, hiring, marketing spend). Decision-makers understand how to interpret and weight AI recommendations alongside other inputs. |
| 4 | AI-driven decision-making is the default for operational decisions. Human oversight focuses on strategic, ethical, or novel situations. The organization has clear protocols for when AI recommendations should be overridden. |
Q20. Continuous AI Innovation and Learning
| Score | Description |
|---|---|
| 1 | The organization does not experiment with new AI capabilities beyond any current deployments. No systematic process exists for staying current with AI developments. |
| 2 | Individuals informally track AI developments. Occasional experimentation occurs, but there is no structured innovation process. |
| 3 | A structured process exists for evaluating emerging AI technologies and capabilities. The organization allocates time and budget for AI experimentation beyond current production use cases. Lessons from each AI project are captured and shared. |
| 4 | Continuous AI innovation is embedded in operations. The organization runs ongoing experiments, participates in AI research or industry groups, and has a roadmap for adopting emerging AI capabilities (agentic AI, multimodal AI, etc.). AI innovation is a recognized core competency. |
Score-to-Stage Mapping
| Total Score | Maturity Stage | Interpretation |
|---|---|---|
| 20-30 | Stage 1: AI Aware | The organization is aware of AI but has no meaningful capability. Focus should be on education, strategy development, and data readiness. |
| 31-42 | Stage 2: AI Experimenting | Early experimentation is underway. Focus should be on selecting the right first production use case and building foundational data and talent capabilities. |
| 43-55 | Stage 3: AI Implementing | AI is in production. Focus should be on measuring impact, expanding to additional use cases, and building repeatable processes. |
| 56-68 | Stage 4: AI Scaling | AI operates cross-functionally. Focus should be on optimization, governance at scale, and developing advanced capabilities. |
| 69-80 | Stage 5: AI-Native | AI is core to the business. Focus should be on continuous innovation, proprietary AI development, and market leadership. |
Interpreting Dimension Scores: Beyond the total score, examine the score distribution across the five dimensions. A company scoring 3.5 average in Strategy but 1.5 in Data has a dangerous imbalance: ambition without infrastructure. The most common pattern among Asian SMBs is high Strategy, moderate People, and low Data and Technology — reflecting the fact that AI awareness and intent have outpaced foundational capability building.
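The scoring logic above is mechanical enough to automate. The sketch below is a minimal illustration, not part of any official Pertama tooling: it assumes the five dimensions (Strategy, Data, Technology, People, Process) each contribute four questions scored 1-4, in order, sums the 20 scores, maps the total to a stage using the table above, and flags the "ambition without infrastructure" imbalance when dimension averages diverge by two points or more (the gap in the 3.5-versus-1.5 example).

```python
# Minimal sketch of the scoring logic described above. The dimension
# ordering and the 2.0-point imbalance threshold are assumptions drawn
# from the assessment structure and the 3.5-vs-1.5 example in the text.

DIMENSIONS = ["Strategy", "Data", "Technology", "People", "Process"]

# (lower bound, upper bound, stage label) from the score-to-stage table.
STAGE_BANDS = [
    (20, 30, "Stage 1: AI Aware"),
    (31, 42, "Stage 2: AI Experimenting"),
    (43, 55, "Stage 3: AI Implementing"),
    (56, 68, "Stage 4: AI Scaling"),
    (69, 80, "Stage 5: AI-Native"),
]

def assess(scores):
    """scores: list of 20 ints (1-4), questions 1-20 in order."""
    if len(scores) != 20 or not all(1 <= s <= 4 for s in scores):
        raise ValueError("Expected 20 scores between 1 and 4")
    total = sum(scores)
    stage = next(label for lo, hi, label in STAGE_BANDS if lo <= total <= hi)
    # Per-dimension averages: four consecutive questions per dimension.
    averages = {
        dim: sum(scores[i * 4:(i + 1) * 4]) / 4
        for i, dim in enumerate(DIMENSIONS)
    }
    # Flag a dangerous imbalance: a gap of two or more points between
    # the strongest and weakest dimension averages.
    gap = max(averages.values()) - min(averages.values())
    return total, stage, averages, gap >= 2.0

total, stage, avgs, imbalanced = assess([3] * 4 + [2] * 4 + [2] * 4 + [3] * 4 + [2] * 4)
```

A company answering mostly 2s and 3s, as in the call above, lands at Stage 3 with no imbalance flag; a high-Strategy, low-Data profile would trip the flag even at the same total score.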
Where Most Asian SMBs Actually Sit
The distribution of Asian SMBs across the Pertama AI Maturity Model stages is not speculation — it is derived from triangulating multiple authoritative data sources against the model's stage definitions.
The Data Behind the Distribution
The OECD's December 2025 report on AI adoption by SMEs provides the broadest quantitative foundation: globally, only 20.2% of firms reported using AI in 2025, and among small firms specifically, the figure drops to 17.4%. Large firms adopt at 52%. In Southeast Asia, BCG research shows that only 23% of businesses have fully adopted AI across the region, while Singapore leads with 48% of businesses having adopted AI in some form. Malaysia's figure is 27%, while Vietnam and Indonesia hover around 42% among e-commerce merchants, though adoption is significantly lower among traditional SMBs.
However, "adoption" in these surveys typically means any use of AI, including informal consumer tool usage that would only qualify as Stage 1 awareness or Stage 2 experimentation under the Pertama framework. The critical distinction is between adoption and maturity. McKinsey's State of AI 2025 data clarifies this: while 88% of companies use AI in at least one function, only one-third have begun to scale AI at the enterprise level, and only 1% of leaders describe their companies as mature. For companies below USD 100 million in revenue — the vast majority of Asian SMBs — only 29% have reached the scaling stage.
Cross-referencing IDC's finding that 88% of AI proof-of-concepts fail to reach production, the OECD's data showing 17.4% small-firm AI adoption, and McKinsey's 29% scaling figure for sub-USD-100-million companies, the Pertama Partners analysis yields the following distribution:
Stage Distribution: Asian SMBs in 2026
| Stage | Percentage | Description |
|---|---|---|
| Stage 1: AI Aware | 38% | Have discussed AI, may use consumer tools informally, no structured activity |
| Stage 2: AI Experimenting | 35% | Running pilots or testing AI SaaS tools, no production deployment |
| Stage 3: AI Implementing | 19% | At least one AI system in production with measured business impact |
| Stage 4: AI Scaling | 4% | Multiple AI systems across functions, governance and processes in place |
| Stage 5: AI-Native | <1% | AI embedded in core operations and competitive positioning |
This means 73% of Asian SMBs are at Stage 1 or Stage 2 — aware of or experimenting with AI, but not yet deriving production value from it. This figure is consistent with OECD findings that most SMEs using generative AI are limited to "simple, infrequent, peripheral tasks such as drafting documents, handling emails or marketing copy," which falls squarely within Stage 2 behavior.
Market Variation Within the Region
The 73% aggregate figure masks significant market-level variation:
| Market | Stage 1-2 % (est.) | Notable Factors |
|---|---|---|
| Singapore | 55-60% | Highest government AI support in ASEAN; 48% business AI adoption; strong digital infrastructure |
| Hong Kong | 60-65% | 45% of enterprises have official AI platforms; talent shortage is top barrier |
| Malaysia | 70-75% | 27% business AI adoption but 35% year-over-year growth; fast-moving policy environment |
| Vietnam | 72-78% | 42% adoption in e-commerce but much lower in traditional sectors; new Digital Technology Industry Law in 2026 |
| Indonesia | 72-78% | 42% e-commerce adoption; large domestic market creates incentive; digital infrastructure gaps outside Java |
| Thailand | 75-80% | 39% adoption; regulatory framework under development; manufacturing sector shows promise |
| Philippines | 78-82% | AI regulatory framework planned for 2026 ASEAN chairmanship; BPO sector most advanced |
Industry Variation
Across the region, industry matters as much as geography:
| Industry | Stage 1-2 % (est.) | Stage 3+ % (est.) | Driving Factor |
|---|---|---|---|
| Financial Services | 62% | 38% | Regulatory pressure, data richness, established IT infrastructure |
| Manufacturing | 72% | 28% | IoT integration, quality control use cases, global supply chain pressure |
| Professional Services | 76% | 24% | Knowledge work automation, client expectations, competitive differentiation |
| Retail | 79% | 21% | Customer-facing AI, recommendation engines, but fragmented data |
Financial services leads because the sector combines regulatory pressure to modernize, relatively rich data assets, established IT infrastructure, and competitive pressure from fintech disruptors. Manufacturing follows because AI applications in quality control and predictive maintenance offer clear, measurable ROI that justifies the investment. Professional services and retail trail because their AI use cases tend to be less capital-intensive but also less compelling in terms of measurable business impact, making it harder to justify progression beyond experimentation.
Industry-Specific Pathways
The path from Stage 1 to Stage 5 is not the same for every industry. The order in which AI capabilities are developed, the use cases that deliver the greatest impact at each stage, and the organizational changes required for progression all vary significantly by sector. The following pathways describe the optimal AI maturity progression for the four industries most prevalent among Pertama Partners' clients.
Financial Services Pathway: Compliance-Driven, Data-Rich
Why financial services leads: Financial institutions — banks, insurers, asset managers, and advisory firms — possess three natural advantages for AI adoption: (1) regulatory mandates that create urgency for modernization, (2) inherently data-rich operations that provide training material for AI models, and (3) established IT infrastructure from previous rounds of digital transformation.
Optimal progression:
Stage 1 to 2: Begin with regulatory compliance and reporting automation. Financial regulations across ASEAN markets generate enormous documentation burdens. AI document summarization and compliance checking tools deliver immediate value while building organizational comfort with AI.
Stage 2 to 3: Deploy AI in credit risk assessment or fraud detection — the use cases with the highest proven ROI in Asian financial services. A Philippine rural bank or Vietnamese lending platform that implements AI credit scoring can reduce default rates by 15-25% while expanding its addressable market to underserved borrowers.
Stage 3 to 4: Expand AI from risk management into customer-facing applications: personalized product recommendations, AI-powered financial planning tools, intelligent customer service that handles complex queries. This requires robust data governance — a capability that financial services firms should have developed at Stage 3 given regulatory requirements.
Stage 4 to 5: Pursue AI-driven business model innovation: algorithmic trading for asset managers, fully automated underwriting for insurers, AI financial advisory that democratizes wealth management. At this stage, the firm's AI capabilities become its primary competitive differentiator.
Key risk: Regulatory compliance can become a bottleneck rather than an enabler. Financial services AI deployments in ASEAN must navigate multiple regulatory regimes simultaneously, and the compliance burden can slow progression from Stage 3 to Stage 4 if not managed proactively.
Manufacturing Pathway: Operations-Focused, IoT Integration
Why manufacturing is a strong AI candidate: Manufacturing generates vast quantities of operational data from production lines, supply chains, and quality systems. AI applications in manufacturing — quality control, predictive maintenance, demand forecasting — produce measurable, attributable ROI that builds the business case for further investment.
Optimal progression:
Stage 1 to 2: Start with AI-assisted demand forecasting and inventory optimization. These use cases leverage existing ERP data, require minimal infrastructure investment, and address a pain point that every manufacturer understands. An Indonesian garment manufacturer that reduces excess inventory by 15% through AI forecasting immediately sees the value.
Stage 2 to 3: Deploy computer vision for quality inspection on a single production line. Visual quality control is the most proven AI manufacturing use case and delivers results that are visible, measurable, and compelling to skeptical operations teams. This also establishes the IoT infrastructure (cameras, sensors, edge computing) that supports future AI applications.
Stage 3 to 4: Expand AI across the production floor: predictive maintenance to reduce downtime, process optimization to improve yields, supply chain AI to optimize procurement. Connect shop-floor AI to ERP and business systems so that operational intelligence flows to management decision-making.
Stage 4 to 5: Pursue lights-out manufacturing capabilities where appropriate, AI-optimized product design, and digital twin technology for simulation and planning. At this stage, the manufacturer's AI capability extends to its supplier and customer ecosystem.
Key risk: Legacy equipment without digital connectivity creates a hard ceiling on AI maturity. A Stage 2 manufacturer with analog production equipment must invest in IoT infrastructure before AI can be applied to operations — an investment that may be difficult to justify without the AI ROI data that only comes from deployment.
Professional Services Pathway: Knowledge Work Automation
Why professional services matters: Law firms, accounting practices, consulting firms, and advisory businesses are the most knowledge-intensive SMB sectors. Their primary cost is human expertise, and their primary constraint is the number of hours their experts can work. AI that augments or automates knowledge work directly improves the fundamental economics of the business.
Optimal progression:
Stage 1 to 2: Deploy AI writing and document tools for routine output: client reports, proposals, correspondence, meeting summaries. These are low-risk, immediately visible use cases that every knowledge worker can evaluate based on their own experience.
Stage 2 to 3: Implement AI-powered research and analysis tools that accelerate core service delivery. A Hong Kong law firm that deploys AI legal research cuts associate research time by 60% and improves the quality of cited precedents. An accounting firm that uses AI to automate audit workpaper preparation can handle more clients without proportional headcount growth.
Stage 3 to 4: Deploy AI for client-facing services: AI-assisted financial modeling, automated report generation with human review, AI diagnostic tools that clients access directly. This stage requires careful change management because it changes the nature of the client relationship.
Stage 4 to 5: Develop proprietary AI tools that become the firm's competitive moat. An advisory firm with AI-powered industry analysis that no competitor can replicate has transformed from a people business into a platform business.
Key risk: Professional services firms face the most intense cultural resistance to AI adoption. Partners and senior professionals who have built careers on personal expertise may view AI as a threat rather than an enabler. Change management is the critical success factor, not technology.
Retail Pathway: Customer-Facing AI First
Why retail presents unique opportunities and challenges: Retail businesses interact with customers at high volume, generating behavioral data that feeds AI personalization. However, Asian retail SMBs — particularly brick-and-mortar chains — often have fragmented data systems, limited e-commerce penetration, and thin margins that constrain AI investment.
Optimal progression:
Stage 1 to 2: Start with AI-powered marketing: automated email campaigns, social media content generation, and basic customer segmentation. These tools are inexpensive (often under USD 500/month), deliver visible results quickly, and build organizational comfort with AI without touching core operations.
Stage 2 to 3: Deploy a product recommendation engine on e-commerce platforms and implement AI-powered customer service (chatbot or virtual assistant). For omnichannel retailers, unify online and offline customer data to enable cross-channel AI. A Thai fashion retailer that implements AI recommendations on its LINE shopping channel can increase average order value by 10-20%.
Stage 3 to 4: Expand AI into operations: demand forecasting for inventory optimization, dynamic pricing, supply chain AI for vendor management. Connect customer-facing AI insights to operational decisions — if the recommendation engine drives demand for a product, the demand forecast should reflect that.
Stage 4 to 5: Deploy fully autonomous inventory management, AI-driven store layout optimization, and predictive customer lifetime value models that inform every aspect of the business from merchandising to marketing to site selection.
Key risk: The customer data unification problem. Most Asian retail SMBs lack a unified customer data platform, and building one is a prerequisite for any AI that operates across channels. This infrastructure investment often stalls retailers at Stage 2 because the cost and complexity seem disproportionate to the immediate AI use case.
Stage-by-Stage Advancement Playbook
Progressing through the Pertama AI Maturity Model is not automatic. Each transition requires specific investments, organizational changes, and capability building. The following playbook details what it takes to move from each stage to the next, calibrated for the resource constraints and market realities of Asian SMBs.
Advancing from Stage 1 to Stage 2: Building the Foundation
Timeline: 3-6 months
Estimated Investment: USD 10,000-30,000
Key Objective: Move from awareness to structured experimentation
Actions Required:
- Conduct an AI opportunity assessment. Map all business processes and identify the top 5-10 where AI could reduce cost, improve quality, or increase speed. Do not attempt to boil the ocean — focus on processes where the pain is real and the data exists.
- Appoint an AI lead. This does not need to be a full-time role. Identify someone in the organization — typically in IT, operations, or strategy — who will own the AI exploration effort. Give them 20-30% dedicated time and a small discretionary budget.
- Audit your data. Before any AI experiment can begin, understand what data you have, where it lives, what condition it is in, and what gaps exist. This data audit is the single most important foundational activity. An AI tool cannot compensate for data that does not exist.
- Launch one AI literacy initiative. Run a half-day AI workshop for leadership and managers. Use the OECD's finding that 71% of workers do not know what AI skills they need as motivation. The goal is not to create AI experts but to establish a shared vocabulary and realistic expectations.
- Subscribe to 1-2 AI SaaS tools. Start with general-purpose tools (enterprise ChatGPT, AI writing assistant) that employees can use immediately. Track usage and gather feedback to understand adoption patterns and resistance.
- Identify your first pilot. Based on the opportunity assessment and data audit, select one use case for a formal pilot with defined objectives, timeline, and success criteria.
Advancing from Stage 2 to Stage 3: Crossing the Chasm
Timeline: 6-14 months (average 14 months for those who succeed)
Estimated Investment: USD 30,000-150,000
Key Objective: Deploy at least one AI system in production with measured business impact
This is the most critical and most dangerous transition. IDC data shows that 88% of AI proof-of-concepts fail to reach production. MIT research indicates that 95% of generative AI pilots fail to deliver measurable financial returns. For Asian SMBs, where budgets are tighter and organizational resilience is lower, this transition is where AI journeys most commonly die.
Actions Required:
- Select the right first production use case. The first AI system deployed in production must be one that (a) addresses a genuine business pain point, (b) has sufficient quality data, (c) can show measurable impact within 90 days of deployment, and (d) does not require organizational transformation to implement. Common strong first use cases for Asian SMBs include: AI-powered customer service for high-volume inquiries, automated document processing for compliance or financial operations, demand forecasting for inventory optimization, and quality inspection for manufacturing.
- Invest in data readiness for the selected use case. This means cleaning, consolidating, and structuring the specific data sets that the production AI system will consume. Budget 30-40% of the total project cost for data preparation — in line with industry averages, and a share most SMBs underestimate.
- Secure technical capability. For most Asian SMBs at this stage, the right approach is not hiring a full-time AI engineer (too expensive, too hard to recruit) but engaging a managed AI service provider or specialized consultant who can deploy and support the production system. Budget USD 50,000-100,000 for vendor engagement over 6-12 months.
- Establish governance basics. Before any AI system touches customers, employees, or critical business processes, establish minimum governance: data handling policies, AI output review procedures, escalation paths when AI makes errors, and compliance with relevant regulations in each operating market.
- Define success metrics before deployment. The single biggest predictor of pilot-to-production success is having clear, measurable success criteria defined before the AI system goes live. "Reduce customer response time from 4 hours to 30 minutes" is a success metric. "Improve customer experience" is not.
- Plan for organizational change. Train the employees who will interact with the AI system. Address concerns directly. Establish feedback mechanisms. The most common reason AI systems are abandoned post-deployment is not technical failure but user rejection.
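The distinction between a usable success metric and a vague aspiration can be made concrete with a small sketch. The structure below is illustrative, not prescribed by the framework; the field names and the response-time figures are taken from the example in the text:

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    """A pilot-to-production success criterion, defined before go-live.

    Illustrative structure only -- field names are assumptions, not part
    of the Pertama framework.
    """
    name: str
    baseline: float            # measured value before the AI deployment
    target: float              # value the deployment must reach
    unit: str
    lower_is_better: bool = True

    def met(self, measured: float) -> bool:
        """True if the post-deployment measurement hits the target."""
        if self.lower_is_better:
            return measured <= self.target
        return measured >= self.target

# The example from the text: reduce customer response time
# from 4 hours (240 minutes) to 30 minutes.
response_time = SuccessMetric(
    name="customer response time",
    baseline=240.0,
    target=30.0,
    unit="minutes",
)
# A vague goal like "improve customer experience" cannot be expressed
# in this form at all -- which is exactly the test of whether a
# proposed success criterion is measurable.
```

The design point is simply that a metric must carry a baseline, a target, and a direction before go-live; anything that cannot be written this way is not yet a success criterion.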
Advancing from Stage 3 to Stage 4: Building the AI Operating Model
Timeline: 12-24 months
Estimated Investment: USD 150,000-500,000
Key Objective: Expand AI from a single initiative to a cross-functional capability
Actions Required:
- Build a repeatable AI deployment process. Document what worked (and what did not) in the Stage 3 deployment. Create templates for AI project scoping, data assessment, vendor evaluation, deployment, and performance measurement. The goal is to reduce the cost and time of each subsequent AI deployment.
- Expand to 2-3 additional use cases. Select use cases in different business functions to demonstrate AI's cross-functional applicability. Each deployment should be faster and more cost-effective than the first, leveraging the process and infrastructure built at Stage 3.
- Invest in data infrastructure: a unified data platform that serves multiple AI use cases. This is the most significant infrastructure investment in the Stage 3-to-4 transition, typically costing USD 50,000-200,000 for an SMB. It is also the investment that most dramatically reduces the marginal cost of future AI deployments.
- Hire or contract dedicated AI talent. At Stage 4, a part-time AI lead is no longer sufficient. The organization needs at least 1-2 people whose primary job is AI: managing systems, evaluating new use cases, optimizing performance, and ensuring governance compliance.
- Establish AI governance at scale. With multiple AI systems operating, governance cannot remain ad hoc. Implement a formal AI governance framework covering data access, model performance monitoring, bias assessment, privacy compliance, and incident response.
- Connect AI to strategic planning. AI should inform business strategy, not just execute it. Integrate AI-generated insights into quarterly planning, budgeting, and competitive analysis. Begin tracking an enterprise-wide AI ROI metric.
Advancing from Stage 4 to Stage 5: Becoming AI-Native
Timeline: 24-36+ months
Estimated Investment: USD 500,000+ annually
Key Objective: Embed AI in the organization's core operating model and competitive positioning
Actions Required:
- Develop proprietary AI capabilities. At Stage 5, competitive advantage comes from AI that competitors cannot replicate. This means investing in custom models trained on proprietary data, building AI tools that serve customers directly, and developing internal AI capabilities that go beyond what commercial platforms provide.
- Redesign core processes around AI. Rather than augmenting existing processes with AI, redesign processes to be AI-first. What would your supply chain look like if AI were the primary decision-maker? What would your customer service model look like if AI handled 90% of interactions? This requires fundamental rethinking, not incremental improvement.
- Build an AI culture. At Stage 5, AI literacy is not a training program — it is an organizational norm. Every employee at every level understands AI, uses AI, and contributes to AI improvement. AI innovation is everyone's responsibility, not a specialized function.
- Invest in emerging AI capabilities. Agentic AI, multimodal models, and autonomous systems represent the next frontier. Stage 5 organizations are early adopters of these technologies, running experiments today that will become competitive advantages tomorrow. McKinsey reports that 23% of respondents are already scaling agentic AI, and an additional 39% are experimenting — Stage 5 organizations should be in the leading cohort.
- Contribute to the ecosystem. Stage 5 organizations are AI leaders in their markets, participating in industry standards development, sharing best practices, and contributing to the regulatory dialogue that shapes AI governance in the region.
Tool and Vendor Recommendations at Each Stage
The following recommendations describe categories of tools appropriate at each maturity stage, calibrated for Asian SMB budgets and technical capacity. Specific vendor names are avoided because the market evolves rapidly; instead, the guidance focuses on what types of tools to evaluate and what capabilities to prioritize.
Stage 1: AI Awareness Tools
| Tool Category | Purpose | Budget Range (Monthly) | Key Selection Criteria |
|---|---|---|---|
| General-purpose AI assistants | Build AI familiarity across the team | USD 20-30 per user | Data privacy controls, multi-language support (critical for Asian markets), enterprise administration |
| AI learning platforms | Structured AI education for leadership and staff | USD 500-2,000 total | Content relevance to SMB context, available in local languages, practical not academic |
| Industry AI newsletters and communities | Stay informed on AI developments relevant to your sector | Free to USD 200 | Regional focus, sector specificity, actionable not theoretical |
Stage 2: AI Experimentation Tools
| Tool Category | Purpose | Budget Range (Monthly) | Key Selection Criteria |
|---|---|---|---|
| No-code/low-code AI platforms | Build simple AI applications without engineering resources | USD 100-500 | Ease of use, pre-built templates for common use cases, integration with existing business tools |
| AI-powered SaaS (vertical) | Industry-specific AI applications for testing | USD 200-2,000 | Proven track record in your industry, support for Asian markets, clear pricing with no hidden costs |
| Data preparation tools | Clean and structure data for AI experiments | USD 100-500 | Ability to connect to your existing data sources, visual data profiling, basic transformation capabilities |
| Chatbot/conversational AI platforms | Build and test customer-facing AI | USD 100-1,000 | Multi-language support (Thai, Vietnamese, Bahasa, Cantonese, Mandarin), messaging platform integration (LINE, WhatsApp, WeChat) |
Stage 3: AI Implementation Tools
| Tool Category | Purpose | Budget Range (Monthly) | Key Selection Criteria |
|---|---|---|---|
| Cloud AI/ML platforms | Run production AI workloads | USD 500-5,000 | Reliability, scalability, Asia-Pacific data centers, compliance with local data residency requirements |
| API integration platforms | Connect AI systems to business applications | USD 200-1,000 | Pre-built connectors for your CRM, ERP, and other systems; reliability and monitoring capabilities |
| AI monitoring and observability | Track AI system performance in production | USD 200-500 | Model performance monitoring, drift detection, alerting, cost tracking |
| Managed AI services | Access AI expertise without full-time hires | USD 2,000-10,000 | Regional presence, industry expertise, clear SLAs, knowledge transfer commitment |
Stage 4: AI Scaling Tools
| Tool Category | Purpose | Budget Range (Monthly) | Key Selection Criteria |
|---|---|---|---|
| MLOps platforms | Manage AI model lifecycle at scale | USD 1,000-5,000 | Automated training, testing, deployment pipelines; model versioning; experiment tracking |
| Enterprise data platforms | Unified data infrastructure for multiple AI use cases | USD 2,000-10,000 | Cross-functional data access, real-time capabilities, data governance features, scalability |
| AI governance platforms | Manage compliance, bias, and risk across AI systems | USD 500-2,000 | Multi-regulation support (critical for multi-market Asian operations), audit trails, bias detection |
| Agentic AI frameworks | Build multi-step autonomous AI workflows | USD 1,000-5,000 | Reliability, human-in-the-loop capabilities, integration with existing AI and business systems |
Stage 5: AI-Native Tools
| Tool Category | Purpose | Budget Range (Monthly) | Key Selection Criteria |
|---|---|---|---|
| Custom model development platforms | Build proprietary AI models on proprietary data | USD 5,000-20,000+ | GPU access, fine-tuning capabilities, model serving infrastructure, cost optimization |
| Real-time AI infrastructure | Sub-second AI inference for core business operations | USD 5,000-20,000+ | Latency guarantees, edge deployment options, auto-scaling, high availability |
| AI product development tools | Build AI-powered features and products for customers | USD 2,000-10,000 | SDKs, API management, usage analytics, multi-tenant architecture |
Common Pitfalls at Each Stage Transition
Every stage transition in the Pertama AI Maturity Model contains what practitioners call a "death valley" — a specific pattern of failure that claims the majority of companies attempting the transition. Understanding these patterns is the first step toward avoiding them.
Death Valley 1: Stage 1 to Stage 2 — "The Strategy-Action Gap"
Failure Rate: Approximately 25% of Stage 1 companies fail to reach Stage 2 within 12 months
What Goes Wrong: The organization discusses AI extensively at leadership level but never converts discussion into action. Common manifestations include:
- Analysis paralysis: Leadership commissions report after report on AI possibilities without ever committing to a specific pilot. The perfect is the enemy of the good — there is no ideal first AI use case, only good-enough ones.
- The FOMO trap: The organization tries to launch 5-10 AI initiatives simultaneously, spreading its limited resources across too many fronts. None receives sufficient attention to succeed.
- The missing owner: Everyone agrees AI is important, but no individual is accountable for making it happen. Without a named owner with dedicated time and budget, AI remains a conversation topic rather than an operational initiative.
How to Cross It: Appoint one person, pick one use case, set a 90-day deadline, and allocate a specific budget. Constrain the scope ruthlessly. The goal is not to solve the company's biggest problem with AI — it is to generate the organization's first experience of AI in action.
Death Valley 2: Stage 2 to Stage 3 — "The Pilot Purgatory"
Failure Rate: 60% of Stage 2 companies never reach Stage 3
This is the deadliest transition. IDC data confirms that 88% of AI proof-of-concepts fail to reach production, and MIT research shows that 95% of generative AI pilots fail to deliver measurable financial returns. For Asian SMBs, the compounding factors of limited budgets, scarce talent, and immature data infrastructure make this the transition where AI journeys most commonly die.
What Goes Wrong:
- Perpetual piloting: The organization runs pilot after pilot, each demonstrating AI's potential but none transitioning to production. Pilots are treated as demonstrations rather than deployments. After 2-3 pilots that "went well" but did not ship, organizational momentum dies.
- The data wall: The pilot worked with a small, clean data set. Production requires the full, messy, incomplete, inconsistent reality of the company's actual data. Most companies underestimate data preparation costs by 50% or more.
- The vendor dependency trap: An AI vendor runs a successful pilot using their own team's expertise. When the vendor disengages, the company lacks the internal capability to operate, monitor, and improve the system. The pilot worked; the production system fails.
- The champion departure: The single AI champion who drove experimentation leaves the company, and no one else has the knowledge, motivation, or authority to continue. Institutional AI knowledge walks out the door.
- The ROI impatience: Leadership expects immediate financial returns from the first AI deployment. When the first quarter's results are modest (as they usually are — most organizations achieve satisfactory AI ROI within 2-4 years, not months), funding is cut.
How to Cross It: Commit to a production deployment from the beginning — not a pilot that might become production, but a deployment that is designed for production from day one. Budget 40% of the project for data preparation. Require the vendor to include knowledge transfer and internal capability building in the engagement. Define success metrics before deployment and commit to a 12-month evaluation period.
Death Valley 3: Stage 3 to Stage 4 — "The Scaling Silo"
Failure Rate: Approximately 45% of Stage 3 companies stall before reaching Stage 4
What Goes Wrong:
- The hero project problem: The first AI system in production was built by a specific team for a specific problem. It succeeds, but the success is not replicable because the approach was bespoke rather than systematic. Every subsequent AI use case requires the same from-scratch effort.
- Data architecture debt: Each AI system creates its own data pipelines and stores. By the third or fourth system, the organization has a spaghetti of data connections that are fragile, duplicative, and unmaintainable.
- Governance avoidance: With one AI system, governance can be informal. With three or four, it cannot. But formalizing governance is perceived as slowing down innovation, so it is deferred until a governance failure (privacy breach, biased outcome, system failure) forces the issue.
- The talent ceiling: One AI system can be managed by a vendor. Three or four require internal expertise. But hiring AI talent in Asian markets is exceptionally difficult — 75% of Asia-Pacific employers cannot find the talent they need — and the cost of competitive AI salaries can exceed what an SMB planned to spend on AI in total.
How to Cross It: After the first successful production deployment, invest in infrastructure before launching the next use case. Build a reusable data platform, create deployment templates, establish governance policies, and secure sustainable AI talent (which may mean partnering with a managed services provider rather than hiring full-time). The Stage 3-to-4 transition is an infrastructure investment disguised as a scaling decision.
Death Valley 4: Stage 4 to Stage 5 — "The Innovation Plateau"
Failure Rate: Approximately 55% of Stage 4 companies stall at this stage
What Goes Wrong:
- Optimization over innovation: The organization becomes excellent at deploying and managing AI systems but stops innovating. AI becomes an operational capability rather than a strategic one.
- The build vs. buy dilemma: Stage 5 requires proprietary AI capabilities, which means building custom models and tools. Most SMBs have been buying AI as a service. The transition from consumer to producer of AI capability requires fundamentally different skills, investment levels, and risk tolerance.
- Market constraints: In some Asian markets, the data infrastructure, regulatory environment, or competitive landscape does not yet support Stage 5 AI-native operations. An organization can reach Stage 5 readiness internally but be constrained by external factors.
How to Cross It: Make a deliberate strategic decision about whether Stage 5 is the right target. Not every organization needs to be AI-native. For many Asian SMBs, Stage 4 — AI operating effectively across the business — is a strong, defensible position. Stage 5 is appropriate for companies where AI capability is the primary competitive differentiator and where the market environment supports it.
The Maturity-Revenue Correlation
The business case for AI maturity advancement is not abstract. Data from multiple sources demonstrates a clear, consistent correlation between AI maturity and financial performance — and the gap is widening.
The Revenue Growth Premium
BCG's 2025 research reveals that companies that moved early into generative AI adoption report USD 3.70 in value for every dollar invested, with top performers achieving USD 10.30 per dollar. This 2.8x gap between average and top AI performers represents the compounding advantage of maturity: mature organizations do not just use AI more — they use it better.
McKinsey's State of AI data shows that 56% of firms saw revenue gains from AI, with most estimating a 6-10% revenue boost. However, this masks enormous variation: only 6% of organizations qualify as "AI high performers" (generating 5%+ of EBIT from AI), and these high performers are disproportionately concentrated at higher maturity stages.
For Asian SMBs specifically, Singapore's data provides the most granular evidence: 87% of Singapore SMBs that have adopted AI report revenue growth attributable to AI. The key qualifier is "that have adopted AI" — the majority have not, which means the majority are not capturing this value.
The Pertama Maturity-Revenue Analysis
Synthesizing the available data, the Pertama Partners analysis finds the following correlations between maturity stage and financial performance:
| Maturity Stage | Revenue Growth Premium vs. Stage 1 | Operational Cost Reduction in AI-Augmented Functions | AI ROI (Value per Dollar Invested) |
|---|---|---|---|
| Stage 1: AI Aware | Baseline | None | N/A |
| Stage 2: AI Experimenting | +3-5% | Negligible | Negative (investment phase) |
| Stage 3: AI Implementing | +10-18% | 15-25% | USD 1.50-2.50 |
| Stage 4: AI Scaling | +20-30% | 25-40% | USD 3.00-5.00 |
| Stage 5: AI-Native | +35-50%+ | 40-60% | USD 5.00-10.00+ |
Companies at Stage 3 and above report 2.5 times higher revenue growth than peers at Stage 1-2. This finding is consistent with McKinsey's data showing that 78% of companies with C-suite AI support (a Stage 3+ characteristic) report ROI, compared to only 43% without it.
The Cost of Inaction
The maturity-revenue correlation is not static — it is accelerating. As more companies reach Stage 3 and above, the competitive penalty for remaining at Stage 1-2 increases. OECD data shows that AI adoption among firms globally has more than doubled from 8.7% in 2023 to 20.2% in 2025. In Asia-Pacific specifically, IDC projects that by 2030, 50% of new economic value from digital businesses will come from organizations that invested in AI today.
For an Asian SMB currently at Stage 1, the calculation is straightforward:
- Cost of advancing to Stage 3 over 18 months: USD 100,000-250,000 (including data infrastructure, tools, talent, and organizational change)
- Expected annual revenue benefit at Stage 3: 10-18% revenue growth premium on current revenue
- Expected annual operational savings: 15-25% in AI-augmented functions
- Time to positive ROI: 12-24 months from production deployment
For a company with USD 10 million in revenue, a 10% growth premium represents USD 1 million in annual incremental revenue against a total investment of USD 100,000-250,000. Even at the conservative end — the full USD 250,000 investment and only the 10% premium — the payback period is roughly three months once AI is in production, comfortably under six.
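The arithmetic above can be sketched directly. A minimal illustration using the ranges quoted in this section; the helper function below is our own sketch for checking the numbers, not a Pertama Partners tool:

```python
# Illustrative payback calculation using the ranges quoted above.
# This is a hypothetical sketch, not part of the maturity framework.

def payback_months(investment_usd: float,
                   annual_revenue_usd: float,
                   growth_premium: float) -> float:
    """Months to recoup the investment from incremental revenue alone."""
    incremental_annual = annual_revenue_usd * growth_premium
    return 12 * investment_usd / incremental_annual

# Conservative case: top of the USD 100k-250k investment range,
# bottom of the 10-18% Stage 3 revenue growth premium.
months = payback_months(
    investment_usd=250_000,
    annual_revenue_usd=10_000_000,
    growth_premium=0.10,
)
print(round(months, 1))  # 3.0
```

At the low end of the investment range (USD 100,000), the same formula gives a payback of just over one month, which is why the six-month figure cited here is comfortably conservative.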
The risk is not that AI investments will fail to generate returns. The risk is that competitors who advance through the maturity stages faster will capture market share, talent, and customer relationships that become increasingly expensive to recover. In Asia's fast-moving markets, where digital-native competitors can emerge from adjacent markets with minimal friction, the cost of inaction compounds more rapidly than in stable Western markets.
The Widening Gap
BCG's research on the "AI value gap" confirms what the maturity model predicts: the gap between AI leaders and laggards is widening, not narrowing. Companies that invested early are building compounding advantages — better data, better talent, better processes, better customer experiences — that late entrants cannot easily replicate.
For Asian SMBs, this creates urgency. The window for catching up is not infinite. Companies that advance from Stage 1 to Stage 3 in the next 18 months will be positioned to capture the AI-driven economic value that IDC projects for the region. Companies that remain at Stage 1-2 risk finding themselves competing against AI-augmented competitors with tools they do not have, insights they cannot generate, and cost structures they cannot match.
The Pertama AI Maturity Model is not an academic exercise. It is a diagnostic tool for determining where your organization stands, a roadmap for where it needs to go, and a playbook for getting there. The data is unambiguous: higher maturity correlates with higher revenue, lower costs, and stronger competitive position. The only variable is speed.
This research paper was produced by Pertama Partners. The Pertama AI Maturity Model, self-assessment scorecard, and stage definitions are proprietary frameworks developed for use with Asian SMB clients. For a facilitated assessment using this framework, contact Pertama Partners at pertamapartners.com.
Data sources include Gartner AI Maturity Model Framework, IDC Asia/Pacific AI Research 2025, McKinsey Global Survey: The State of AI 2025, BCG Generative AI Adoption in Asia 2025, Stanford HAI AI Index Report 2025, Forrester AI Maturity Assessment Framework, OECD AI Adoption by SMEs 2025, Deloitte State of AI in the Enterprise 2026, HKPC AI Readiness in Workplace Survey 2025, ASEAN Expanded Guide on AI Governance and Ethics 2025, Workera/IDC AI Workforce Readiness Report 2025, and GTIA SMB Technology and Buying Trends 2025.