AI Readiness & Strategy · Playbook

AI Project Recovery Playbook: Rescuing Failing Initiatives

March 27, 2025 · 15 min read · Michael Lansdowne Hauge
For: CTO/CIO · Consultant · CFO · CEO/Founder · CHRO · IT Manager · Data Science/ML · Head of Operations · Product Manager

47% of struggling AI projects can be saved with structured recovery protocols. Learn the diagnostic framework, intervention strategies, and decision criteria for rescuing or killing failing AI initiatives.

[Figure: Dashboard showing AI project trajectory shifting from red to green zones]

Key Takeaways

  1. Nearly half of struggling AI projects are recoverable if organizations intervene within 6–9 months using a structured diagnostic and recovery process.
  2. A four-phase framework (rapid assessment, root cause analysis, recoverability scoring, and a 90-day recovery or kill decision) creates discipline and speed.
  3. Scope reduction to a few high-value use cases is the most reliable recovery lever, especially when combined with focused integration and adoption work.
  4. Objective, pre-agreed kill criteria prevent sunk cost bias and help leaders terminate unsalvageable projects before they become expensive zombie initiatives.
  5. Systematic recovery playbooks not only rescue viable projects but also accelerate the shutdown of failing ones, improving portfolio ROI and organizational learning.

Executive Summary: Research from BCG and McKinsey indicates that 47% of struggling AI projects can be successfully recovered through structured intervention, provided action is taken within the first six to nine months of recognizing failure signals. The window narrows rapidly. Projects that persist in struggle beyond twelve months see recovery success rates fall to just 18%. The majority of recovery failures trace back to five critical mistakes: delayed recognition of problems, treating symptoms rather than root causes, insufficient resource reallocation, failure to reset stakeholder expectations, and sunk cost thinking that prevents kill decisions. Organizations that employ systematic recovery playbooks save an average of $1.8 million per rescued project and abandon unsalvageable initiatives seven months faster, preventing $2.3 million in additional sunk costs. The imperative is clear: rapid, honest diagnosis followed by decisive action, whether that means recovery or graceful termination.

The $3.2 Million Recovery

A global logistics company's AI route optimization project was failing fourteen months into implementation. The initiative was 89% behind schedule, having projected eight months for delivery but consuming fourteen with no completion in sight. Costs had ballooned to 3.2 times the original budget, with $3.8 million spent against a $1.2 million projection. Driver adoption stood at a dismal 11%, far below the 80% target set for Month 6. Technical performance had reached only a 23% improvement versus the 45% goal. Behind closed doors, the executive sponsor was privately considering cancellation.

At Month 14, leadership made a pivotal decision. They brought in an external recovery consultant, commissioned a 30-day diagnostic and recovery plan, and set a firm executive committee go/no-go decision at Day 30.

What the Diagnostic Revealed

The consultant's assessment uncovered five interrelated root causes. First, the team had defined the wrong problem, optimizing individual routes rather than pursuing network-wide optimization. Second, GPS tracking data was 34% incomplete, rendering predictions unreliable at a fundamental level. Third, the AI system could not write back to the dispatch system, forcing manual data transfer that introduced delays and errors. Fourth, drivers viewed the AI as "management surveillance" rather than a helpful tool, generating active resistance. Fifth, the interface was desktop-only, making it functionally unusable for drivers who spent their working hours on the road.

The Recovery Plan

Armed with this diagnosis, the team designed a six-month aggressive intervention structured in three phases.

During the first two months, the focus was stabilization. The team narrowed scope from all 47 routes to just 3 high-value corridors. They rebuilt the data quality pipeline with automated GPS validation, constructed an API integration enabling dispatch system write-back, developed a mobile-first driver interface, and reframed all messaging around the concept of an "AI assistant" rather than a "monitoring system."

Months three and four centered on rebuilding trust. The team co-designed improvements with driver champions, held weekly driver feedback sessions, demonstrated visible rapid iteration on pain points, and introduced gamification and incentives to encourage adoption.

The final two months were devoted to proving value through an intensive support model on the three pilot routes. The team measured and communicated wins weekly, targeting 60% or greater adoption on pilot routes and a 30% or greater efficiency improvement.

Recovery Results

Six months after the intervention began, results validated the recovery decision. Adoption on pilot routes reached 67%, up from 11% across all routes. Efficiency improvement climbed to 31%, surpassing the revised target. The project achieved a positive ROI trajectory, and leadership approved expansion to 12 additional routes. Total cost reached $4.9 million, well above the original $1.2 million projection, but the project had been salvaged and was generating value.

The recovery economics told a compelling story. The recovery effort itself cost $1.1 million. The alternative, cancellation, would have meant writing off $3.8 million already spent and forgoing $2.4 million in projected annual value. Instead, the organization realized that $2.4 million in annual value with a payback period of just five months from re-launch.

This recovery succeeded because intervention happened at Month 14, not Month 24. The window for recovery closes quickly.

Diagnostic Framework: Is Recovery Viable?

Phase 1: Rapid Assessment (Week 1)

The first step in any recovery effort is rigorous symptom documentation. Teams should catalog all observable failure signals across six dimensions:

  • Schedule variance, measured in months behind target
  • Budget variance, expressed as a percentage over budget
  • Adoption metrics, comparing actual figures against targets
  • Technical performance across accuracy, speed, and reliability
  • User sentiment, drawn from feedback, complaints, and workarounds
  • Business value realized versus projections

Equally important is a round of confidential stakeholder interviews conducted with explicit psychological safety. The project team should be asked what is genuinely going wrong and what has not been discussed openly. Users should be asked directly why they are not engaging with the system and what would need to change. Sponsors should be asked to articulate their confidence level and whether they are ready to end the initiative. Vendors should be asked what they see that the internal team might be missing.

Finally, teams should conduct rigorous data analysis examining usage analytics to identify where users drop off, technical logs to pinpoint what is failing, cost tracking to understand where money is flowing, and value measurement to determine whether any value at all is being realized.

Phase 2: Root Cause Analysis (Week 2)

The Five Whys technique is essential here. Teams must resist the temptation to stop at symptoms and instead dig methodically to root causes.

Consider an illustrative example. The symptom is low user adoption. Asking why reveals that users say the system is too slow. Asking why again shows that API calls to the legacy system are timing out. The next why uncovers that the legacy system was never designed for real-time queries. Digging further reveals that the integration architecture assumed modern API capability. The fifth why exposes the true root cause: the technical assessment was conducted by a vendor who did not understand the organization's infrastructure. Inadequate technical due diligence before architecture decisions was the actual failure, not user reluctance.
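The chain above can be captured as a simple ordered record, which is useful when a diagnostic surfaces several parallel threads. This is a minimal sketch: the content mirrors the adoption example in the text, while the variable and function names are illustrative.

```python
# One completed Five Whys thread from the adoption example.
# Index 0 is the observed symptom; each subsequent entry answers
# "why?" for the one before it; the last entry is the root cause.
adoption_thread = [
    "Low user adoption",
    "Users say the system is too slow",
    "API calls to the legacy system are timing out",
    "The legacy system was never designed for real-time queries",
    "The integration architecture assumed modern API capability",
    "Technical due diligence was done by a vendor who did not "
    "understand the organization's infrastructure",
]

def root_cause(thread: list[str]) -> str:
    """The last 'why' in a completed thread is the actionable root cause."""
    return thread[-1]
```

Documenting threads this way keeps the distinction between symptom and root cause explicit, so the recovery plan targets the final entry rather than the first.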

Root causes in failing AI projects typically fall into five categories.

The first is problem definition failures, where the team is solving the wrong problem, building something misaligned with actual user needs, or operating without clear success criteria.

The second is technical execution issues, encompassing insufficient data quality, integration complexity exceeding expectations, performance falling short of requirements, and accumulated technical debt.

The third is organizational barriers, including weak or absent executive sponsorship, inadequate change management, unaddressed user resistance, and competing priorities that starve the project of attention.

The fourth is resource constraints, where the team lacks necessary skills, the budget is insufficient for the scope, the timeline is unrealistic, or key dependencies are unavailable.

The fifth is external factors such as overstated vendor capability, shifts in market or business context, changed regulatory requirements, or an evolved technology landscape that has rendered the original approach obsolete.

Phase 3: Recoverability Assessment (Week 3)

At this stage, teams should apply a Recovery Viability Scorecard, rating the project from 1 to 5 (with 5 being most favorable) across twenty criteria grouped into four dimensions.

Technical viability examines whether the core technology works even if not yet at scale, whether data quality issues are fixable, whether integration challenges have identifiable solutions, whether performance can reach a minimum viable threshold, and whether the technical team possesses the necessary skills.

Business viability assesses whether ROI remains achievable with a revised scope and timeline, whether the business case fundamentals remain sound, whether users still want the solution if executed properly, whether the market and business context have not materially changed, and whether clearly superior alternatives have not emerged.

Organizational viability evaluates whether the executive sponsor remains committed, whether team morale can be rebuilt, whether the organization retains sufficient change capacity, whether political support exists for a recovery effort, and whether budget is available for the recovery investment.

Time viability considers whether the recovery timeframe is acceptable to stakeholders, whether the competitive window remains open, whether regulatory and compliance deadlines are still achievable, whether key dependencies remain available, and whether the intervention is occurring within twelve months of first recognizing the problem.

The scoring framework provides clear guidance. A total between 75 and 100 points indicates high recovery probability with a 70 to 80% success rate, and recovery is recommended. A score of 50 to 74 points suggests moderate recovery probability at 40 to 50% success, viable if critical fixes are achievable. A score of 25 to 49 points signals low recovery probability at 15 to 25% success, warranting a recommendation to kill the project or pursue a radical pivot. A score of 0 to 24 points means recovery is not viable, with less than a 10% success rate, and immediate termination is the appropriate course.
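The band logic above can be encoded as a small helper so that the scorecard's interpretation is fixed before scoring begins. This is a sketch: the bands and success rates come from the framework described here, while the function name and return format are illustrative.

```python
def recovery_recommendation(total_score: int) -> tuple[str, str]:
    """Map a Recovery Viability Scorecard total (0-100, from twenty
    criteria rated 1-5) to a recommendation and indicative success rate."""
    if not 0 <= total_score <= 100:
        raise ValueError("total_score must be between 0 and 100")
    if total_score >= 75:
        return "Recover", "70-80%"
    if total_score >= 50:
        return "Recover if critical fixes are achievable", "40-50%"
    if total_score >= 25:
        return "Kill or pursue a radical pivot", "15-25%"
    return "Terminate immediately", "<10%"
```

Fixing the mapping in advance matters: the recommendation follows mechanically from the score, leaving no room to re-interpret a borderline result under sunk cost pressure.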

Phase 4: Recovery Plan or Kill Decision (Week 4)

If the assessment supports proceeding with recovery, the team should develop a 90-day intensive recovery plan built around six elements. First, specific actions to remediate each identified root cause. Second, scope adjustment that clearly defines what gets cut, what stays, and what is essential. Third, resource reallocation covering team changes, budget additions, and vendor adjustments. Fourth, a realistic timeline reset with milestones that include adequate buffer. Fifth, a stakeholder reset that establishes new expectations and a communication plan. Sixth, and critically, explicit kill criteria defining the metrics that would trigger an abort if recovery is not working.

If the assessment favors termination, the team should develop a graceful termination plan. This plan encompasses honest stakeholder communication with lessons learned, a user transition plan for reverting to previous processes or implementing an alternative solution, contract wind-down including vendor termination and asset disposition, team transition covering reassignment and morale management, knowledge capture documenting failures to prevent repetition, and financial close-out addressing final accounting and write-offs.

Recovery Intervention Strategies

Strategy 1: Scope Reduction ("Narrow and Deepen")

This strategy applies when a project is attempting to do too much and spreading resources thin. The approach is straightforward: identify one to three highest-value use cases, cut everything else regardless of how tempting peripheral features may be, focus 100% of resources on proving value within the narrowed scope, and plan expansion only after success has been validated.

One enterprise AI initiative that attempted to automate 15 processes simultaneously illustrates the power of this approach. After reducing scope to just 2 of the highest-ROI processes, the team achieved success within four months and then expanded from a position of demonstrated value. The indicators that this strategy is working include a team that can focus rather than context-switch, users who see clear value in core use cases, technical complexity that becomes manageable, and quick wins that rebuild organizational confidence.

Strategy 2: Technical Reset ("Foundation Rebuild")

When the core technical approach is fundamentally flawed, a more dramatic intervention is required. This means acknowledging openly that the current technical approach is not working, bringing in external expertise for a fresh perspective, redesigning the architecture, model, or integration approach from first principles, and potentially replacing vendors or the technology stack entirely.

A healthcare AI project that had been using the wrong modeling approach demonstrates this strategy in action. The team brought in an academic advisor, switched from a rules-based approach to a machine learning approach, and subsequently achieved target performance levels. This strategy should only be pursued when the organization has the appetite for what amounts to starting over on the same problem.

Strategy 3: Adoption Blitz ("Win Hearts and Minds")

When the technology works but users are not adopting it, the intervention must be organizational rather than technical. This calls for an intensive change management campaign: identifying and empowering champions, co-designing improvements with users, ruthlessly removing friction points, creating incentives and gamification mechanisms, and securing visible executive support.

A sales AI tool with 18% adoption illustrates the potential of this approach. After a 90-day adoption blitz that included competitive leagues between teams, executive dashboards providing visibility into usage, and rapid UX improvements, adoption surged to 73%. The factors that made this work were incorporating user feedback rapidly through weekly iterations, granting champions genuine authority and recognition, and tracking and publicly celebrating adoption milestones.

Strategy 4: Integration Sprint ("Make It Work Together")

Many AI projects produce valuable outputs in isolation but fail because they do not integrate with existing workflows and systems. The remedy is a focused 30-to-60-day integration development sprint. This may require hiring integration specialists, building APIs, connectors, and data pipelines, embedding the AI within existing tools rather than maintaining it as a standalone application, and eliminating all manual data transfer.

One organization found that its AI insights required copy-pasting into the CRM system. After building a direct Salesforce integration, adoption jumped from 24% to 68% in just six weeks. The lesson is clear: integration must be treated as a core deliverable, not an afterthought.

Strategy 5: Team Swap ("Fresh Eyes, New Energy")

When the team is burned out, mired in political issues, or lacking critical skills, changing the people on the project may be the most effective intervention. This can mean replacing the project lead or, in extreme cases, the entire team. It may involve bringing in an external recovery specialist on a temporary basis, adding missing competencies in integration, change management, or domain expertise, and resetting team dynamics and morale.

One AI project led by a data scientist who lacked delivery experience was transformed when an experienced project manager was brought in and the original data scientist moved to a technical advisor role. The project delivered within five months. Personnel changes of this nature must be handled with professionalism, and institutional knowledge must be preserved throughout the transition.

Strategy 6: Expectation Reset ("Radical Honesty")

When a project is suffering from unrealistic expectations or unchecked scope creep, the most constructive intervention is a stakeholder reset meeting. This means presenting an honest assessment of what is achievable and what is not, renegotiating scope, timeline, budget, and success criteria, securing explicit buy-in to the revised targets or making the decision to kill the project, and communicating transparently about challenges throughout the organization.

One AI project that had promised 70% automation was delivering just 18% at the time of intervention. The target was reset to 30%, stakeholders accepted the realistic goal, and the project ultimately succeeded at 32% automation, a genuine achievement that would have been reported as failure against the original 70% benchmark. The key to this strategy is framing the conversation as "setting the project up for success" rather than "lowering standards."

Recovery Kill Criteria

Before any recovery effort begins, the team must establish objective criteria for abandoning the attempt. Setting these boundaries in advance prevents sunk cost thinking from distorting decision-making as the recovery unfolds.

Time-based kill criteria should specify that if no measurable improvement appears within 60 days, if a critical milestone is missed by more than 30 days, or if the recovery extends beyond a six-month window, the effort should be terminated.

Performance-based kill criteria should define thresholds such as user adoption remaining below 40% after 90 days of intervention, technical performance falling below the minimum viable threshold after remediation, the ROI trajectory remaining negative after cost and scope adjustments, or team attrition exceeding 30% during the recovery period.

Resource-based kill criteria should trigger termination when recovery costs exceed 50% of remaining lifetime value, when required resources in terms of budget, skills, or tools are unavailable, or when the opportunity cost of continuing exceeds the expected value of recovery.

Strategic kill criteria should prompt termination when business case fundamentals have changed due to shifts in market conditions, competitive dynamics, or regulation, when a better alternative solution has been identified, when organizational priorities have shifted, or when political support has evaporated.

The critical principle is this: establish kill criteria before recovery begins and do not move the goalposts during the recovery process.
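One way to keep the goalposts fixed is to encode the agreed thresholds as an immutable record at the start of recovery. This is a minimal sketch under the thresholds listed above; the class, field, and function names are illustrative, and a real implementation would cover the strategic criteria as well.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen: thresholds cannot be edited mid-recovery
class KillCriteria:
    max_days_without_improvement: int = 60
    max_milestone_slip_days: int = 30
    max_recovery_months: int = 6
    min_adoption_after_90_days: float = 0.40   # 40% adoption floor
    max_recovery_cost_ratio: float = 0.50      # vs remaining lifetime value
    max_team_attrition: float = 0.30           # 30% attrition ceiling

def tripped_criteria(c: KillCriteria, *, days_without_improvement: int,
                     milestone_slip_days: int, recovery_months: float,
                     adoption_at_90_days: Optional[float],
                     recovery_cost: float, remaining_value: float,
                     attrition: float) -> list[str]:
    """Return the list of kill criteria breached; any hit means terminate."""
    tripped = []
    if days_without_improvement > c.max_days_without_improvement:
        tripped.append("no measurable improvement within 60 days")
    if milestone_slip_days > c.max_milestone_slip_days:
        tripped.append("critical milestone missed by more than 30 days")
    if recovery_months > c.max_recovery_months:
        tripped.append("recovery extended beyond the six-month window")
    if (adoption_at_90_days is not None
            and adoption_at_90_days < c.min_adoption_after_90_days):
        tripped.append("adoption below 40% after 90 days of intervention")
    if recovery_cost > c.max_recovery_cost_ratio * remaining_value:
        tripped.append("recovery cost exceeds 50% of remaining value")
    if attrition > c.max_team_attrition:
        tripped.append("team attrition above 30% during recovery")
    return tripped
```

The frozen dataclass is the point of the design: once sponsors sign off, the thresholds are data, not opinions, and relaxing one requires a visible, deliberate change rather than a quiet reinterpretation.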

Recovery Success Metrics

Throughout the recovery effort, teams should track two categories of metrics on a weekly basis.

Leading indicators predict recovery success before results are fully visible. These include team morale and confidence measured through weekly surveys, user engagement with recovery improvements, the velocity at which issues are being resolved, stakeholder sentiment gathered through regular executive sponsor check-ins, and the availability of key dependencies.

Lagging indicators measure actual recovery progress. These encompass the user adoption trajectory, technical performance improvement, the cost burn rate relative to the recovery budget, timeline adherence to the recovery plan, and business value realized even in small increments.

A recovery health dashboard should synthesize these metrics into a simple status. Green indicates the recovery is on track with all metrics meeting targets. Yellow signals risk, with one to two metrics showing concerning trends. Red means the recovery is failing, with three or more metrics missed or a critical metric in failure.

The decision rule should be firm: two consecutive weeks in Red triggers a kill assessment. This prevents the gradual drift that allows failing recoveries to consume resources without producing results.
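The dashboard rule and the two-week trigger can be expressed in a few lines; the status logic below follows the Green/Yellow/Red definitions above, and the function names are illustrative.

```python
def weekly_status(metrics_missed: int, critical_failure: bool) -> str:
    """Green/Yellow/Red status from the recovery health dashboard:
    Red on 3+ missed metrics or any critical-metric failure,
    Yellow on 1-2 concerning metrics, Green otherwise."""
    if critical_failure or metrics_missed >= 3:
        return "Red"
    if metrics_missed >= 1:
        return "Yellow"
    return "Green"

def kill_assessment_triggered(weekly_statuses: list[str]) -> bool:
    """Two consecutive weeks in Red triggers a kill assessment."""
    return any(a == b == "Red"
               for a, b in zip(weekly_statuses, weekly_statuses[1:]))
```

Because the trigger is mechanical, a failing recovery cannot drift for a quarter on optimistic narration: two Red weeks force the conversation.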

Graceful Termination Protocol

When a project cannot be recovered, the obligation shifts to terminating it with discipline and professionalism. A structured wind-down preserves organizational learning, protects team morale, and ensures financial accountability.

Week 1: Decision and Planning

The first week is devoted to making and documenting the executive decision to terminate, identifying a termination lead, developing a stakeholder communication plan, planning the user transition for any partially deployed AI capability, and reviewing contracts for termination clauses and obligations.

Weeks 2 and 3: Stakeholder Communication

Communication should proceed in a deliberate sequence. Executive sponsors are briefed first. The project team is then informed with offers of support and reassignment. Users receive notification along with a transition timeline. Vendors are contacted to negotiate contract wind-down terms. Finally, if the project was high-profile, the broader organization is updated.

Throughout this process, the communication principles remain constant: be honest about what did not work and why, frame the outcome as a learning opportunity rather than a failure, acknowledge the effort the team invested, and emphasize that killing bad projects is an act of good management, not an admission of defeat.

Weeks 3 and 4: Operational Wind-Down

The operational phase involves decommissioning technical infrastructure, migrating or archiving data, closing vendor contracts, reassigning team members to productive work, and documenting initial lessons learned while context is still fresh.

Weeks 4 Through 6: Knowledge Capture

The final phase centers on a blameless, learning-focused post-mortem. The documentation should capture what the team learned about the problem space, what worked even if the project as a whole did not, what did not work and the specific reasons why, and recommendations for future initiatives that address similar problems. These lessons should be shared across the organization to ensure that the investment, while not producing the intended AI capability, at least yields institutional knowledge that prevents the same mistakes from recurring.

Financial Close-Out

The financial wind-down requires final cost accounting, write-offs and asset disposition, contract settlements with vendors, and updates to the organization's AI portfolio tracking to reflect the project's outcome and total cost.

Prevention: Early Warning System

The most effective recovery is the one that never becomes necessary. Organizations should establish systematic early warning monitoring across their AI portfolio.

Monthly health checks on all AI projects should track budget variance, schedule variance, adoption metrics, technical performance, team sentiment, and stakeholder confidence. These six dimensions provide a comprehensive view of project health and surface problems before they become crises.

Quarterly deep dives on projects showing signs of struggle should include root cause analysis, recovery planning, and an explicit kill-or-continue decision. The discipline of quarterly review prevents the slow deterioration that makes recovery progressively more difficult and expensive.

Certain red flags demand immediate intervention regardless of the review cycle. These include falling three or more months behind schedule, exceeding budget by 50% or more, achieving user adoption of less than 50% of the target three months after launch, experiencing team attrition above 20%, and observing executive sponsor disengagement.
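The red-flag thresholds above lend themselves to an automated check over the monthly health-check data. This is a sketch using the thresholds stated in the text; the parameter names are illustrative.

```python
def immediate_red_flags(*, months_behind_schedule: float,
                        budget_overrun_pct: float,
                        adoption_pct_of_target: float,
                        months_since_launch: float,
                        team_attrition_pct: float,
                        sponsor_disengaged: bool) -> list[str]:
    """Red flags that demand intervention regardless of the review cycle."""
    flags = []
    if months_behind_schedule >= 3:
        flags.append("three or more months behind schedule")
    if budget_overrun_pct >= 50:
        flags.append("budget exceeded by 50% or more")
    if adoption_pct_of_target < 50 and months_since_launch >= 3:
        flags.append("adoption below 50% of target, 3+ months post-launch")
    if team_attrition_pct > 20:
        flags.append("team attrition above 20%")
    if sponsor_disengaged:
        flags.append("executive sponsor disengagement")
    return flags
```

Running such a check against every project in the portfolio each month makes the early warning system systematic rather than dependent on someone noticing.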

Key Takeaways

According to BCG and McKinsey research, 47% of struggling AI projects can be successfully recovered, but only when intervention happens within six to nine months of recognizing failure signals. The recovery success rate drops to 18% after twelve months, meaning the window for salvaging failing projects closes rapidly. Most failures stem from misdiagnosed root causes, where teams treat symptoms such as low adoption rather than underlying causes such as poor integration. Scope reduction stands as the most common successful recovery strategy: narrow focus to the highest-value use cases, prove value, then expand. Recovery also demands an honest stakeholder reset, because unrealistic expectations doom projects while radical honesty enables success. Teams must establish kill criteria before recovery begins so that objective metrics, rather than sunk cost thinking, drive the decision to continue or terminate. Organizations that adopt systematic recovery playbooks save an average of $1.8 million per rescued project and stop unsalvageable projects seven months faster, redirecting resources toward initiatives with genuine potential.

Common Questions

How do we distinguish a project that needs formal recovery from one that needs a normal course correction?

Normal course corrections involve minor scope or timeline adjustments (typically under 15% variance), issues being resolved within normal governance cadence, and sustained stakeholder confidence. Recovery is needed when there is 30%+ variance in cost or schedule, fundamental issues such as wrong problem definition, failing technical approach, or user rejection, low team morale, executive sponsor doubt, or more than six months without meaningful progress. If leaders are seriously asking whether to intervene, that is usually the signal to initiate a structured recovery assessment.

What determines whether a recovery succeeds or fails?

Success is primarily driven by timing of intervention, quality of diagnosis, decisiveness of action, and organizational commitment. Interventions within 6–9 months of recognizing failure have 70–80% success rates, while efforts started after 12 months drop to 15–25%. Successful recoveries feature fast root-cause analysis, clear 90-day recovery plans, adequate budget and talent, aligned expectations, and technically viable foundations. Failures are usually caused by denial and delay, treating symptoms instead of root causes, under-resourcing the recovery, and team burnout.

When should we bring in external consultants rather than recover internally?

External consultants are most valuable when the internal team is burned out or politically constrained, when specialized recovery or integration expertise is missing, when an independent view is needed to cut through organizational bias, or when leadership needs external validation for tough decisions. Internal recovery is preferable when the team has the skills and energy, the issues are mainly organizational or political, deep domain knowledge is critical, or budgets are tight. A hybrid model, with an external recovery lead plus an internal team, is often the most effective.

How do we maintain team morale during a recovery?

Morale is preserved by combining radical honesty with psychological safety. Leaders should provide a clear, realistic recovery path, focus on systemic causes rather than individual blame, and create visible quick wins that are celebrated weekly. Ensuring the team has the resources, authority, and support they need, recognizing their effort regardless of outcome, and being explicit about future opportunities if the project is terminated all help prevent burnout and disengagement.

How do we prevent zombie projects across the portfolio?

Avoid zombie projects by institutionalizing quarterly portfolio reviews, objective kill criteria, and mandatory health checks on schedule, budget, adoption, and stakeholder confidence. Make opportunity costs visible so leaders see what other initiatives are being delayed by keeping a weak project alive. Build a culture that treats timely termination as good management, not failure, and hold sponsors accountable both for outcomes and for making disciplined kill decisions when predefined thresholds are breached.

Your Recovery Window Is Shorter Than You Think

BCG and McKinsey data indicate that nearly half of struggling AI projects can be salvaged—but only if a structured recovery is launched within 6–9 months of recognizing serious failure signals. After 12 months, recovery odds collapse to around 18%. If your project has been in the red for more than two quarters, you should treat recovery as an urgent, time-boxed intervention rather than a slow, incremental course correction.

Define Kill Criteria Before You Start Recovery

Before launching a recovery, agree on explicit, measurable thresholds for adoption, performance, cost, and time that will trigger termination. Document these criteria, get sponsor sign-off, and resist the temptation to move goalposts later. This protects you from sunk cost bias and ensures that recovery efforts remain disciplined, not open-ended.

47%

Share of struggling AI projects that can be recovered with structured intervention

Source: BCG Technology Practice, 2025

$1.8M

Average savings per AI project for organizations using systematic recovery playbooks

Source: McKinsey Digital, 2024

"The most common mistake in AI project recovery is treating visible symptoms—like low adoption or missed milestones—instead of the underlying structural causes in problem definition, integration, and change management."

AI Project Recovery Playbook

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

