
AI Failures Are Leadership Failures: Why 84% Start at the Top

February 8, 2026 · 13 min read · Pertama Partners
Updated February 21, 2026
For: CEO/Founder, CTO/CIO, CHRO, Board Member, Consultant, IT Manager, CISO, Head of Operations, Data Science/ML

84% of AI failures are leadership-driven, not technical. This analysis reveals the specific leadership decisions that doom projects and what executives must do...

Part 4 of 17

AI Project Failure Analysis

Why 80% of AI projects fail and how to avoid becoming a statistic. In-depth analysis of failure patterns, case studies, and proven prevention strategies.


Key Takeaways

  1. 84% of AI failures are leadership-driven: executives approve projects without metrics, underinvest in foundations, and lose sponsorship mid-project
  2. The most common mistake: approving AI initiatives with vague objectives instead of clear, measurable business outcomes (73% of failures)
  3. Successful leaders assess data readiness first; Grab invested $47M in data foundations before launching major AI initiatives
  4. AI requires organizational transformation, not IT deployment: treat it as business change requiring sustained executive sponsorship
  5. Effective governance addresses model oversight, bias monitoring, decision accountability, and override protocols before deployment

The C-Suite Blind Spot: When Executives Doom AI Before It Starts

A Malaysian bank's board approved an $18 million AI transformation in March 2025. By September, the CTO was forced to present a shutdown recommendation. The technology worked perfectly in testing. The data science team delivered on every technical milestone. Yet the project failed spectacularly—because the CEO never defined what success meant.

This pattern repeats across 84% of failed AI projects, according to research from Deloitte and McKinsey. While organizations blame technical challenges, data quality, or vendor capabilities, the root cause typically sits in the C-suite. Leadership failures—not technical failures—drive the overwhelming majority of AI project collapses.

The uncomfortable truth: most AI failures are preventable executive mistakes dressed up as technical problems.

Why 84% of AI Failures Start at the Top

Research consistently shows that leadership decisions, not technical execution, determine AI project outcomes. A comprehensive analysis of 847 enterprise AI initiatives by Boston Consulting Group revealed that projects with identical technical approaches achieved radically different results based solely on leadership behaviors.

The failure modes are specific and predictable:

Approving projects without clear success metrics (73% of failures): Executives approve AI initiatives with vague objectives like "improve customer experience" or "leverage our data" without defining measurable outcomes. A Singapore logistics company spent $12 million building a demand forecasting system that technically worked but failed commercially because leadership never specified what "improved forecasting" meant—5% accuracy gain? 20%? Reduction in stockouts? Without metrics, the technical team optimized for precision while business units needed speed.

Underinvesting in data foundations (68%): Leaders approve budgets for impressive AI tools but not for the data governance, infrastructure, and capabilities required underneath. They assume existing data is "good enough" without proper assessment. A Thai insurance company spent $8 million on a fraud detection system only to discover their claims data lacked the quality and structure necessary for effective ML models—requiring an additional $22 million remediation investment.

Treating AI as IT projects rather than transformation (61%): Executives delegate AI to IT departments and expect technology deployment. They don't recognize that successful AI requires business transformation, organizational change, new processes, and different decision-making models. A Philippine retail chain implemented a merchandising optimization system that IT delivered flawlessly—but failed because store managers weren't involved, weren't trained, and didn't trust the recommendations.

Losing executive sponsorship mid-project (56%): CEOs and board members champion AI initiatives during approval but disengage during execution. When challenges emerge—and they always do—projects lose the executive air cover needed to navigate organizational resistance, secure additional resources, or make difficult trade-offs. Research shows 56% of AI projects lose active C-suite sponsorship within six months of approval.

Failing to establish governance (44%): Leaders approve AI deployments without governance frameworks for model oversight, bias monitoring, or decision accountability. A Vietnamese bank deployed a loan approval model that technically performed well but created regulatory risk because no executive established clear governance for model validation, bias testing, or override protocols.

These aren't technical failures. The technology typically works. Organizations fail to create the leadership conditions for success.

The Cost of Executive Negligence

The financial impact of leadership-driven failures extends far beyond sunk technology costs. Failed AI projects create cascading organizational damage:

Direct costs: Average failed enterprise AI initiative costs $7.2 million in direct technology spending, consulting fees, and internal resources. Large organizations with multiple failed initiatives lose tens of millions.

Opportunity costs: Resources allocated to failed AI projects could have addressed real business problems. A Malaysian manufacturer spent two years and $14 million on a predictive maintenance system that failed due to poor executive sponsorship—while their core ERP system deteriorated, creating $40+ million in operational inefficiencies.

Damaged credibility: Failed AI projects damage trust between IT and business units, between leadership and employees, and between the organization and external stakeholders. After two high-profile AI failures, a Singapore healthcare provider found it nearly impossible to secure internal buy-in for genuinely valuable digital health initiatives.

Competitive ground lost: While organizations struggle with poorly governed AI projects, competitors with better leadership capture market advantages. Time and resources spent on doomed initiatives represent permanent competitive loss.

Organizational fatigue: Repeated failures create cynicism that makes future innovation harder. Employees stop believing in transformation initiatives. Business units resist participation. The organization develops antibodies against change.

What Successful AI Leaders Do Differently

The minority of AI projects that succeed share remarkably consistent leadership patterns. These aren't lucky organizations with better technology or more favorable circumstances—they have leaders who approach AI fundamentally differently.

1. Define Success Before Approving Projects

Successful executives refuse to approve AI initiatives without clear, measurable success criteria tied to business outcomes. They push back on vague objectives and demand specificity.

DBS Bank's approach to AI governance exemplifies this. Before approving any AI project, their board requires answers to: What specific business metric will improve? By how much? Over what timeframe? What's the minimum viable improvement that justifies the investment? How will we measure it? Who's accountable if we don't achieve it?

This discipline forces teams to think clearly about business value before building technology. Projects that can't articulate clear success metrics don't get funded.
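
As a rough illustration, the approval questions above can be encoded as a gating checklist that blocks funding until every answer exists. This is a hypothetical sketch only; the field names and the gate logic are our own assumptions, not DBS's actual process.

```python
# Hypothetical sketch of a "define success before approval" gate.
# All field names and checks are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIProjectProposal:
    name: str
    target_metric: str            # which business metric will improve
    expected_improvement: float   # by how much, e.g. 0.20 for 20%
    minimum_viable_improvement: float  # smallest gain that justifies the spend
    timeframe_months: int         # over what timeframe
    measurement_method: str       # how it will be measured
    accountable_owner: str        # named executive accountable for the outcome

def approval_gate(p: AIProjectProposal) -> list[str]:
    """Return the gaps that block approval; an empty list means fundable."""
    gaps = []
    if not p.target_metric:
        gaps.append("No specific business metric named")
    if p.expected_improvement <= 0:
        gaps.append("No quantified improvement target")
    if p.minimum_viable_improvement <= 0:
        gaps.append("No minimum viable improvement defined")
    if p.timeframe_months <= 0:
        gaps.append("No timeframe for results")
    if not p.measurement_method:
        gaps.append("No measurement method defined")
    if not p.accountable_owner:
        gaps.append("No named executive accountable for the outcome")
    return gaps
```

The point of the sketch is the shape of the discipline: a proposal with vague objectives fails the gate on every field, while one with specific, measurable answers passes cleanly.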

2. Assess and Address Data Readiness First

Effective leaders conduct honest data readiness assessments before approving AI investments. They recognize that no amount of sophisticated ML can overcome poor data foundations.

Grab's approach demonstrates this. Before expanding their AI capabilities, they invested heavily in data infrastructure, governance, and engineering—spending nearly $47 million over 18 months before launching major ML initiatives. Their CEO championed data foundations as strategic infrastructure, not IT costs.

This contrasts sharply with organizations that approve AI projects assuming data will "work itself out." It rarely does. Successful leaders make data readiness a gating factor for AI approval.

3. Treat AI as Organizational Transformation

Leaders who succeed with AI recognize it requires fundamental changes to how the organization operates, decides, and works—not just new technology.

Singtel's AI transformation illustrates this. Their executive team positioned AI initiatives as business transformation requiring:

  • Active engagement from business unit leaders throughout design and deployment
  • Comprehensive change management with dedicated resources
  • New decision-making processes that incorporate AI recommendations appropriately
  • Training programs for affected employees
  • Clear governance for when to override AI recommendations

They measured success by business adoption and outcomes, not technical metrics. This organizational focus helped them achieve a 73% successful deployment rate, versus an industry average of roughly 20%.

4. Maintain Sustained Executive Sponsorship

Successful AI leaders stay engaged beyond the approval phase. They provide active sponsorship throughout implementation:

Regular executive reviews: Monthly or quarterly reviews with accountability for progress, not just status updates. Leaders ask hard questions about adoption, business impact, and challenges.

Active problem-solving: When organizational barriers emerge—resistance from business units, resource constraints, competing priorities—executive sponsors intervene directly to resolve issues.

Sustained investment: When AI projects encounter inevitable challenges or require pivots, leaders maintain commitment rather than withdrawing support at the first sign of difficulty.

Organizational championing: Executives actively promote AI adoption throughout the organization, signaling that these initiatives have top-level support.

CIMB Group's digital transformation succeeded partly because their CEO maintained active sponsorship for three years, personally reviewing progress, addressing barriers, and championing adoption across business units.

5. Establish Clear AI Governance

Leaders who succeed with AI create governance frameworks before deployment, addressing:

Model oversight: Who validates models before production? How often are they revalidated? What triggers model review?

Bias monitoring: How do we detect and address bias in AI recommendations? Who's accountable?

Decision accountability: When AI recommendations inform decisions, who's ultimately responsible for outcomes?

Override protocols: Under what circumstances can humans override AI? How do we track and learn from overrides?

Risk management: How do we assess and mitigate AI-specific risks like model drift, adversarial attacks, or unintended consequences?

Governance isn't bureaucracy—it's the framework that enables responsible AI deployment at scale.
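
One way to keep such a framework from staying on paper is to record it as a machine-checkable policy and refuse deployment while any domain is empty. The sketch below is purely illustrative; every name, owner, and threshold is a hypothetical placeholder, not a recommendation from any of the organizations mentioned.

```python
# Illustrative sketch: the five governance domains as a policy record
# that can be checked before deployment. All values are hypothetical.
GOVERNANCE_POLICY = {
    "model_oversight": {
        "validator": "Model Risk Committee",     # who signs off before production
        "revalidation_interval_days": 90,        # scheduled revalidation cadence
        "review_triggers": ["accuracy drop > 5%", "input distribution shift"],
    },
    "bias_monitoring": {
        "metrics": ["demographic parity gap", "equal opportunity gap"],
        "accountable_owner": "Chief Risk Officer",
    },
    "decision_accountability": {
        "final_decision_owner": "business unit head",  # a human, never the model
    },
    "override_protocols": {
        "humans_may_override": True,
        "overrides_logged": True,   # tracked so the organization can learn from them
    },
    "risk_management": {
        "monitored_risks": ["model drift", "adversarial inputs", "unintended consequences"],
    },
}

def governance_gaps(policy: dict) -> list[str]:
    """Flag governance domains that are missing or empty before deployment."""
    required = ["model_oversight", "bias_monitoring", "decision_accountability",
                "override_protocols", "risk_management"]
    return [domain for domain in required if not policy.get(domain)]
```

A deployment pipeline could call `governance_gaps` as a pre-release check, so a model with no named validator or no override protocol simply cannot ship.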

Regional Leadership Patterns in Southeast Asia

Southeast Asian organizations face specific leadership challenges with AI adoption:

Governance expectations: Regulators increasingly expect board-level AI governance. Singapore's MAS guidelines, Malaysia's AI roadmap, and Thailand's PDPA all create governance expectations that require C-suite attention. Leaders can't delegate AI governance solely to technical teams.

Digital maturity variance: Leadership challenges differ dramatically between digitally mature organizations (Singapore banks, regional tech companies) and organizations earlier in digital transformation. Less digitally mature organizations often underestimate the foundational work required before successful AI deployment.

Talent constraints: AI expertise remains concentrated in major hubs (Singapore, KL, Bangkok, Jakarta). Leaders outside these centers must develop strategies for accessing expertise—through partnerships, remote work arrangements, or systematic capability building.

Stakeholder expectations: Government stakeholders, regulators, customers, and employees increasingly expect responsible AI deployment. Leaders must navigate multiple stakeholder expectations while driving business value.

Practical Leadership Actions

For executives and board members overseeing AI initiatives:

Before Approving Projects

  1. Demand clear success metrics: Refuse to approve AI projects without specific, measurable business outcomes and accountability.

  2. Require data readiness assessment: Commission honest evaluation of data quality, accessibility, and governance before technology investments.

  3. Evaluate organizational readiness: Assess whether the organization has skills, processes, and culture to adopt AI successfully. Address gaps before approval.

  4. Clarify decision rights: Define who makes what decisions about the AI initiative, including go/no-go decisions, resource allocation, and scope changes.

During Implementation

  1. Maintain active sponsorship: Schedule regular reviews, address organizational barriers, and visibly support the initiative throughout execution.

  2. Monitor business metrics, not just technical milestones: Track adoption, business impact, and outcome metrics—not just technical deliverables.

  3. Invest in change management: Allocate resources for training, communication, and organizational adaptation comparable to technology spending.

  4. Stay engaged through challenges: When projects encounter difficulties—and all do—maintain commitment and help teams navigate obstacles rather than withdrawing support.

At Board Level

  1. Establish AI governance framework: Create board-level oversight for significant AI initiatives, including risk management, ethics, and bias monitoring.

  2. Build AI literacy: Ensure board members and senior executives understand AI capabilities, limitations, and risks well enough to provide effective oversight.

  3. Link AI to strategy: Ensure AI initiatives connect clearly to business strategy, not technology experimentation.

  4. Hold leadership accountable: Create accountability mechanisms for executive sponsorship and business outcomes, not just technical delivery.

The Path Forward: Leadership First

When 84% of AI failures trace back to leadership decisions, AI doesn't have a technology problem. It has a leadership problem.

The technology works. ML models can detect fraud, predict demand, optimize logistics, personalize experiences, and automate processes. Technical teams can build sophisticated AI capabilities. What fails is leadership—executives who approve projects without clear success criteria, underinvest in foundations, delegate AI to IT, lose interest during execution, and fail to establish governance.

The path to better AI outcomes runs through better leadership:

Clarity over experimentation: Define what success means before building anything. Vague objectives guarantee failure.

Foundations over features: Invest in data infrastructure, governance, and organizational capabilities that enable sustainable AI deployment.

Transformation over technology: Recognize that AI requires organizational change, not just technical implementation.

Sustained commitment over initial enthusiasm: Maintain active executive sponsorship throughout the difficult middle phase when challenges emerge.

Governance over improvisation: Establish frameworks for responsible AI deployment before problems force reactive governance.

Organizations that address these leadership fundamentals consistently outperform industry averages. Their success has little to do with superior technology and everything to do with superior leadership.

The question for executives: will you approach AI with the strategic discipline, sustained commitment, and organizational investment it requires? Or will you join the 84% whose AI failures start in the C-suite?

The technology is ready. The question is whether leadership is.

Common Questions

Why do most AI failures start with leadership rather than technology?

Research from Deloitte and McKinsey shows that leadership failures drive 84% of AI project collapses. These include approving projects without clear success metrics (73%), underinvesting in data foundations (68%), treating AI as IT projects rather than business transformation (61%), losing executive sponsorship mid-project (56%), and failing to establish governance frameworks (44%). The technology typically works—organizations fail to create the leadership conditions for success.

What is the most common leadership mistake that dooms AI projects?

Approving projects without clear success metrics is the most common mistake (73% of failures). Executives approve AI initiatives with vague objectives like 'improve customer experience' without defining measurable outcomes. A Singapore logistics company spent $12 million on demand forecasting that worked technically but failed commercially because leadership never specified what 'improved forecasting' meant—5% accuracy gain? 20%? Reduction in stockouts? Without clear metrics, teams can't optimize for business value.

Why is underinvesting in data foundations so costly?

Leaders approve budgets for impressive AI tools but not for data governance, infrastructure, and capabilities required underneath. They assume existing data is 'good enough' without proper assessment. 68% of failed projects suffer from inadequate data foundations. A Thai insurance company spent $8 million on fraud detection only to discover their claims data required an additional $22 million remediation investment—because executives never commissioned proper data readiness assessment before approval.

Why should AI be treated as business transformation rather than an IT project?

AI requires business transformation, not just IT deployment. Successful executives recognize this requires: active engagement from business stakeholders throughout design and deployment, comprehensive change management with dedicated resources, sustained executive sponsorship beyond the approval phase, treating AI as strategic transformation requiring CEO/board attention, and measuring success by business outcomes and adoption—not technical metrics. Singtel achieved 73% deployment success by positioning AI as organizational transformation from the start.

What does sustained executive sponsorship look like in practice?

Active sponsorship includes: regular executive reviews with accountability (not just status updates), intervening directly when organizational barriers emerge, maintaining investment through inevitable challenges and pivots, and actively championing adoption throughout the organization. CIMB Group's CEO maintained active sponsorship for three years, personally reviewing progress and addressing barriers. Research shows 56% of projects lose active C-suite sponsorship within six months—contributing directly to failure.

What should an AI governance framework address?

Effective governance addresses: model oversight (validation processes, revalidation schedules, review triggers), bias monitoring (detection and mitigation processes), decision accountability (who's responsible for AI-informed decisions), override protocols (when humans can override AI, tracking and learning from overrides), and risk management (model drift, adversarial attacks, unintended consequences). This isn't bureaucracy—it's the framework enabling responsible AI deployment at scale.

What practical actions should executives and boards take?

Before approval: demand clear success metrics, require data readiness assessments, evaluate organizational readiness, and clarify decision rights. During implementation: maintain active sponsorship, monitor business metrics (not just technical milestones), invest in change management, and stay engaged through challenges. At board level: establish governance frameworks, build AI literacy, link AI to strategy, and hold leadership accountable for business outcomes. Organizations addressing these fundamentals consistently outperform industry averages.

