
AI Readiness Assessment: Are You Prepared?

February 17, 2025 · 12 min read · Michael Lansdowne Hauge
For: Consultant · CTO/CIO · CHRO · CFO · CEO/Founder · IT Manager · CISO · Head of Operations · Data Science/ML

Comprehensive readiness framework with 50 diagnostic questions across strategy, data, organization, technology, and governance to determine if you're ready for AI.


Key Takeaways

  1. AI readiness spans five dimensions: Strategy, Data, Organization, Technology, and Governance.
  2. Use the 50 questions to score each dimension from 1 to 5 and identify your weakest areas.
  3. Your lowest-scoring dimension is often the real constraint on AI success, regardless of your overall score.
  4. Translate assessment results into a 90–180 day roadmap with 3–5 concrete, owned actions.
  5. Reassess every 6–12 months to track progress and adjust your AI investment priorities.

Executive Summary: Only 15–30% of organizations are truly ready for successful AI implementation when they launch initiatives. This assessment provides a structured framework to diagnose your organization's AI readiness across five dimensions: Strategy, Data, Organization, Technology, and Governance. Use these 50 questions to identify gaps before investing.


How to Use This Assessment

  • Audience: Senior leaders (CEO, CTO/CIO, Operations, Innovation) and their direct reports.
  • Format: 50 diagnostic questions across 5 dimensions.
  • Scoring: Each question is scored from 1–5: 1 = Not at all true; 2 = Rarely true; 3 = Sometimes true / in progress; 4 = Mostly true; 5 = Fully true and consistently demonstrated.
  • Outcome: Identify where you are strong, where you are exposed, and what to prioritize in the next 3–6 months.

You can complete this as an individual, but it is more powerful when done as a leadership team exercise, comparing scores and discussing gaps.


Dimension 1: Strategy (10 Questions)

Goal: Ensure AI initiatives are anchored in clear business value, not technology experimentation alone.

1. Clear business outcomes. We have defined, measurable business outcomes for AI (e.g., cost reduction, revenue growth, risk reduction, customer experience).

2. Prioritized use cases. We maintain a prioritized list of AI use cases with estimated impact, feasibility, and time-to-value.

3. Strategic alignment. Our AI initiatives are explicitly linked to our overall business strategy and key strategic bets.

4. Executive sponsorship. We have at least one accountable executive sponsor for AI with decision-making authority and budget.

5. Value hypothesis per use case. Each AI use case has a clear value hypothesis, success metrics, and owner.

6. Customer and user focus. We validate AI ideas with real users or customers before committing significant investment.

7. Build vs. buy clarity. We have a clear philosophy for when to build in-house vs. buy or partner for AI capabilities.

8. Portfolio balance. Our AI portfolio balances quick wins (3–6 months) with longer-term, transformational bets.

9. Competitive awareness. We understand how peers and competitors are using AI and where we can differentiate.

10. Funding model. We have a defined funding model for AI (e.g., central budget, business-unit co-funding, innovation fund).

Score for Strategy: Add your scores for questions 1–10 and divide by 10.
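The per-dimension arithmetic above (and for the four dimensions that follow) can be sketched in a few lines of Python. The answer values here are hypothetical examples, not benchmarks:

```python
def dimension_score(answers):
    """Average one dimension's 10 question scores (each scored 1-5)."""
    if len(answers) != 10:
        raise ValueError("each dimension has exactly 10 questions")
    if any(not 1 <= a <= 5 for a in answers):
        raise ValueError("each answer must be scored 1-5")
    return sum(answers) / len(answers)

# Hypothetical answers to Strategy questions 1-10:
strategy = dimension_score([4, 3, 3, 5, 2, 3, 4, 3, 2, 3])
print(round(strategy, 1))  # 3.2
```

The same function applies unchanged to Data, Organization & Talent, Technology & Architecture, and Governance.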


Dimension 2: Data (10 Questions)

Goal: Ensure your data is usable, accessible, and governed enough to support AI.

11. Data availability. The data required for our priority AI use cases exists somewhere in the organization.

12. Data quality. Key data sources are reasonably accurate, complete, and timely for AI purposes.

13. Data accessibility. Teams can access needed data (with appropriate controls) without months of negotiation or manual work.

14. Data integration. We can combine data from multiple systems (e.g., CRM, ERP, support, operations) for analytics and AI.

15. Metadata and documentation. We maintain basic documentation of key data sets (definitions, owners, refresh cycles).

16. Single source of truth. For critical entities (customers, products, employees, assets), we have agreed “golden records” or master data.

17. Unstructured data readiness. We have a plan for using unstructured data (documents, emails, tickets, call transcripts) in AI use cases.

18. Data ownership. Each major data domain has a clear owner accountable for quality and access.

19. Privacy and consent. We understand what data can and cannot be used for AI under current privacy, consent, and contractual constraints.

20. Data platform. We have (or are implementing) a central data platform or warehouse/lake that AI teams can build on.

Score for Data: Add your scores for questions 11–20 and divide by 10.


Dimension 3: Organization & Talent (10 Questions)

Goal: Ensure you have the people, skills, and operating model to deliver AI outcomes.

21. Executive understanding. Senior leaders have a practical understanding of AI’s capabilities, limits, and risks.

22. AI literacy. We provide basic AI literacy training for managers and key roles (product, operations, HR, finance, etc.).

23. Product and process owners. Each AI use case has a clear business owner responsible for adoption and outcomes.

24. Cross-functional teams. We can form cross-functional teams (business, data, engineering, legal, change) to deliver AI use cases.

25. Technical talent. We have access to the necessary technical skills (data engineers, ML engineers, prompt engineers, architects) via employees or partners.

26. Change management. We have a structured approach to change management (communication, training, support) for AI-driven changes.

27. Incentives and KPIs. Incentives and performance metrics encourage adoption of AI solutions rather than preserving legacy ways of working.

28. Role and workforce impact. We assess how AI will change roles, tasks, and workforce needs, and we communicate this proactively.

29. Center of excellence or similar. We have (or are building) an AI/analytics center of excellence, guild, or community of practice.

30. Partner ecosystem. We have trusted external partners (vendors, consultants, integrators) we can draw on for AI expertise.

Score for Organization & Talent: Add your scores for questions 21–30 and divide by 10.


Dimension 4: Technology & Architecture (10 Questions)

Goal: Ensure your technology stack can support secure, scalable AI experimentation and deployment.

31. Core infrastructure. Our core infrastructure (cloud/on-prem) can support AI workloads without major re-architecture.

32. Access to AI platforms. We have access to at least one enterprise-grade AI platform or provider (e.g., cloud AI services, LLM APIs) under proper contracts.

33. Environment for experimentation. Teams have a safe, governed environment (sandboxes, dev environments) to experiment with AI.

34. APIs and integration. Our systems expose APIs or integration mechanisms that allow AI solutions to be embedded into workflows.

35. Security and identity. We have strong identity and access management (IAM) and security controls for AI-related systems and data.

36. Monitoring and observability. We can monitor AI applications in production (performance, usage, errors, latency, costs).

37. Model lifecycle management. We have at least a basic process for versioning, testing, and updating models or prompts.

38. Tooling standardization. We are converging on a small set of standard tools/platforms for AI rather than ad-hoc experimentation everywhere.

39. Performance and cost management. We track and manage the cost and performance of AI workloads (e.g., API usage, GPU time).

40. Resilience and fallback. Critical AI-enabled workflows have fallbacks or manual overrides if models fail or behave unexpectedly.

Score for Technology & Architecture: Add your scores for questions 31–40 and divide by 10.


Dimension 5: Governance, Risk & Ethics (10 Questions)

Goal: Ensure AI is safe, compliant, and aligned with your values and regulatory environment.

41. AI policy. We have a documented AI policy covering acceptable use, data handling, and employee responsibilities.

42. Risk assessment. We assess risks (legal, ethical, operational, reputational) for AI use cases before deployment.

43. Regulatory awareness. We understand the regulatory landscape relevant to our AI use (e.g., privacy, sector-specific rules, AI-specific regulations where applicable).

44. Human-in-the-loop. For high-risk decisions, we ensure appropriate human oversight and final accountability.

45. Bias and fairness. We consider potential bias and fairness issues in data and models, and we have mitigation approaches.

46. Transparency and explainability. Where needed, we can explain how AI-assisted decisions are made in terms that regulators, customers, and employees can understand.

47. Incident response. We have a process for handling AI-related incidents (e.g., harmful outputs, data leakage, model failures).

48. Vendor and third-party risk. We evaluate AI vendors and tools for security, compliance, and alignment with our policies.

49. Ethical principles. We have articulated ethical principles for AI (e.g., safety, accountability, transparency) and use them in decision-making.

50. Auditability and logging. We log AI system activity sufficiently to support audits, investigations, and continuous improvement.

Score for Governance, Risk & Ethics: Add your scores for questions 41–50 and divide by 10.


Scoring & Interpretation

For each dimension, you should now have an average score between 1.0 and 5.0.

Per-Dimension Readiness Levels

4.0 – 5.0: Strong Readiness. You have a solid foundation. Focus on scaling, optimization, and more ambitious use cases.

3.0 – 3.9: Emerging Readiness. You can succeed with targeted AI projects, but you will encounter friction. Prioritize closing the most critical gaps.

2.0 – 2.9: At Risk. Significant weaknesses will likely derail AI initiatives. Address foundational issues before large investments.

1.0 – 1.9: Not Ready. AI projects are likely to fail or remain pilots. Focus on basic strategy, data, and governance capabilities first.

Overall Readiness Score

Add the five dimension averages together. Divide by 5 to get your overall AI readiness score.

Use the same thresholds above to interpret your overall score, but remember: your weakest dimension is often your true readiness level. A single red area (e.g., Governance) can block or severely constrain everything else.
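The full interpretation logic, including the weakest-dimension caveat, can be sketched in Python. The dimension names match this framework; the example scores are hypothetical:

```python
def readiness_level(score):
    """Map a 1.0-5.0 average onto the readiness bands above."""
    if score >= 4.0:
        return "Strong Readiness"
    if score >= 3.0:
        return "Emerging Readiness"
    if score >= 2.0:
        return "At Risk"
    return "Not Ready"

def interpret(dimension_averages):
    """dimension_averages: {dimension name: 1.0-5.0 average score}."""
    overall = sum(dimension_averages.values()) / len(dimension_averages)
    weakest, low = min(dimension_averages.items(), key=lambda kv: kv[1])
    return {
        "overall_score": round(overall, 1),
        "overall_level": readiness_level(overall),
        "weakest_dimension": weakest,
        # The weakest dimension is often the true constraint:
        "effective_level": readiness_level(low),
    }

example = {"Strategy": 3.8, "Data": 2.4, "Organization": 3.1,
           "Technology": 4.2, "Governance": 2.9}
print(interpret(example))
# Overall 3.3 ("Emerging Readiness"), but Data at 2.4 is "At Risk".
```

Reporting both the overall level and the effective level keeps the score honest: a respectable average cannot hide a single red dimension.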


Common Readiness Patterns

Strong Tech, Weak Strategy. You have tools and talent but lack clear business outcomes. Risk: “AI theater” and low ROI.

Strong Strategy, Weak Data. You know what you want to do, but data is fragmented or low quality. Risk: delays, rework, and compromised models.

Strong Pilots, Weak Change Management. POCs succeed, but adoption stalls. Risk: AI remains in labs, not in the business.

Strong Governance, Weak Experimentation. Heavy controls with little room to test. Risk: you fall behind more agile competitors.

Identify which pattern best describes you and use it to guide your next steps.


Turning Assessment Results into an Action Plan

Use your scores to build a 90–180 day roadmap.

Identify your top 2–3 constraint dimensions. Look for any dimension scoring below 3.0. These are your primary blockers.

Select 3–5 high-impact actions. Examples by dimension:

  • Strategy: Run a cross-functional workshop to prioritize 5–10 AI use cases with clear value hypotheses.
  • Data: Stand up a small data engineering squad to make 2–3 critical data sets AI-ready.
  • Organization: Launch an AI literacy program for managers and define product owners for top use cases.
  • Technology: Establish a standard AI platform and a secure experimentation environment.
  • Governance: Publish an AI acceptable-use policy and define a lightweight review process for new use cases.

Assign owners and timelines. Every action should have a named owner, clear deliverables, and a 30/60/90-day check-in.

Pilot, then scale. Choose 1–3 flagship AI use cases that are feasible with your current data and technology, sponsored by a committed business owner, and measurable in terms of value.

Re-assess every 6–12 months. Repeat this assessment to track progress and adjust your roadmap.


Practical Tips for Running This Assessment with Your Team

  • Run it live: Use a workshop format (60–90 minutes) with key stakeholders.
  • Score individually, then compare: Have participants score silently, then discuss differences.
  • Focus on deltas, not averages: Large gaps between functions (e.g., IT vs. business) are signals of misalignment.
  • Capture concrete examples: For each low-scoring question, ask: “What’s a recent example that illustrates this?”
  • End with commitments: Close the session by agreeing on 3–5 concrete actions and owners.
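The "focus on deltas" tip can be made concrete with a small script that flags dimensions where participants disagree most. The roles, dimensions, and scores below are hypothetical:

```python
def score_deltas(scores_by_participant):
    """Per-dimension spread (max - min) across participants' averages."""
    dimensions = next(iter(scores_by_participant.values())).keys()
    deltas = {}
    for dim in dimensions:
        values = [scores[dim] for scores in scores_by_participant.values()]
        deltas[dim] = round(max(values) - min(values), 1)
    return deltas

team = {
    "CIO": {"Strategy": 4.0, "Data": 3.5, "Governance": 3.0},
    "COO": {"Strategy": 2.5, "Data": 3.2, "Governance": 3.1},
    "CFO": {"Strategy": 3.0, "Data": 2.0, "Governance": 2.8},
}
# Spreads of 1.0 or more usually signal misalignment worth discussing
# before averaging the scores away.
flagged = {d: v for d, v in score_deltas(team).items() if v >= 1.0}
print(flagged)  # {'Strategy': 1.5, 'Data': 1.5}
```

Here the technology leader and the operations leader see Strategy very differently, which is itself a finding worth 15 minutes of workshop time.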


FAQs

Who should own AI readiness in the organization?

Ownership is typically shared: a senior business leader (e.g., COO, Chief Digital/Transformation Officer) for outcomes, and a technology/data leader (e.g., CIO/CTO, CDO) for platforms and capabilities. Many organizations also establish an AI or data & analytics steering group to coordinate across functions.

How long does it take to become “AI ready”?

Most organizations can materially improve their readiness in 3–6 months with focused effort on a few critical gaps. Full maturity across all five dimensions is usually a multi-year journey, but you do not need perfection to start delivering value.

Can we start AI projects if our readiness scores are low?

Yes, but you should start small and be explicit about the risks. Choose low-dependency use cases (e.g., internal productivity tools) while you invest in foundational capabilities like data quality and governance.

How often should we repeat this assessment?

A cadence of every 6–12 months works well for most organizations. If you are in a fast-moving transformation or heavily regulated industry, you may want to reassess key dimensions (like Governance and Technology) more frequently.

Do we need a dedicated AI team to be ready?

Not necessarily. You need access to AI skills, which can come from a mix of internal teams, upskilling, and external partners. Over time, many organizations evolve toward a hybrid model: a small central AI/ML or data team plus embedded roles in key business units.


Next Steps

  1. Complete the assessment with your leadership team.
  2. Calculate your dimension and overall scores.
  3. Identify your top 2–3 constraint areas.
  4. Define a 90-day action plan with clear owners.
  5. Revisit and update your scores as you execute.

Use this assessment as a living tool to guide where you invest time, budget, and leadership attention so that AI initiatives translate into real, measurable business outcomes.


Your Weakest Dimension Sets Your Real Readiness Level

Even if you score highly on technology or talent, a single weak area—such as governance, data, or change management—can derail AI initiatives. Treat this assessment as a way to find your primary constraints, not as a vanity score.

15–30%

Estimated share of organizations truly ready for successful AI implementation at launch

Source: Synthesis of major industry surveys

"AI success is less about algorithms and more about whether your strategy, data, people, and governance are ready to support them."

AI Readiness Assessment Framework

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia) · Delivered Training for Big Four, MBB, and Fortune 500 Clients · 100+ Angel Investments (Seed–Series C) · Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.


