
AI Readiness Checklist: 25 Questions to Ask Before Your First AI Project

October 1, 2025 · 10 min read · Michael Lansdowne Hauge
For: IT Leaders, Business Owners, Project Managers, Operations Managers

Assess your organization's AI readiness with this 25-question checklist covering data, infrastructure, skills, governance, and strategy. Includes scoring guide and self-assessment template.


Key Takeaways

  1. Assess organizational readiness across data, technology, people, and governance dimensions
  2. Identify gaps and blockers before committing resources to AI projects
  3. Use these 25 questions to structure conversations with stakeholders and vendors
  4. Focus on foundational capabilities that enable multiple AI use cases rather than point solutions
  5. Document your current state honestly to create a realistic improvement roadmap


Executive Summary

  • This checklist helps organizations assess their readiness for AI adoption through 25 essential questions across five dimensions
  • The questions are designed for business leaders, not technical specialists—no prior AI expertise required
  • Each question includes guidance on what "ready" looks like and common red flags to watch for
  • Use this as a self-assessment tool or discussion guide for leadership team conversations
  • Scoring yourself honestly reveals where to focus preparation efforts before investing in AI
  • Organizations scoring below 60% should address foundational gaps before launching AI initiatives
  • This checklist complements a formal AI readiness assessment but doesn't replace one

Why This Matters Now

Most AI projects fail not because of technology limitations, but because organizations weren't ready for them. The failure rate—estimated at 70-85% by various studies—reflects a preparation gap, not a capability gap.

This checklist helps you identify that gap before you invest significant resources. Twenty minutes with these questions can save months of misdirected effort.


How to Use This Checklist

For self-assessment: Answer each question honestly. Score yourself using the guidance provided. Total your score at the end.

For team discussion: Use these questions to structure a leadership conversation about AI readiness. Different perspectives often reveal blind spots.

For planning: Low-scoring areas indicate where to focus before launching AI initiatives.


Section 1: Data Readiness

Question 1: Do you know where your critical business data lives?

Why it matters: AI systems need data. If you can't locate and access your data, you can't train or operate AI effectively.

Green flags:

  • Documented data inventory exists
  • Data sources are accessible via APIs or standard formats
  • Data ownership is clearly assigned

Red flags:

  • Data scattered across spreadsheets, emails, and personal drives
  • No one knows who owns which data
  • Key data is locked in legacy systems without export capability

Score: 0 (No inventory) | 1 (Partial inventory) | 2 (Complete, documented inventory)


Question 2: Can you assess the quality of your data?

Why it matters: Poor data quality leads to poor AI outcomes. "Garbage in, garbage out" applies directly.

Green flags:

  • Data quality metrics are tracked (completeness, accuracy, timeliness)
  • Regular data cleaning processes exist
  • Data quality issues are flagged and resolved

Red flags:

  • No one has audited data quality recently
  • Known data quality issues are tolerated
  • Different systems have conflicting versions of the same data

Score: 0 (No quality assessment) | 1 (Ad hoc quality checks) | 2 (Systematic quality management)
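Tracking the data-quality metrics mentioned above can start very simply. The sketch below illustrates a completeness check; the record structure and field names are illustrative, not taken from any particular system.

```python
# Minimal sketch of a completeness check, one of the data-quality metrics
# mentioned above (completeness, accuracy, timeliness).
# Record structure and field names here are illustrative.

def completeness(records, required_fields):
    """Fraction of records in which every required field is present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return complete / len(records)

# Hypothetical customer records with typical quality problems.
customers = [
    {"id": 1, "name": "Acme Pte Ltd", "email": "ops@acme.example"},
    {"id": 2, "name": "Globex", "email": ""},         # missing email
    {"id": 3, "name": None, "email": "a@b.example"},  # missing name
    {"id": 4, "name": "Initech", "email": "x@y.example"},
]

print(f"Completeness: {completeness(customers, ['name', 'email']):.0%}")
# prints: Completeness: 50%
```

A metric this small, run regularly and trended over time, is often enough to move an organization from "no quality assessment" to "systematic quality management" on this question.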


Question 3: Is your data properly labeled and categorized?

Why it matters: Many AI applications require labeled training data. Inconsistent categorization creates confusion for both humans and machines.

Green flags:

  • Consistent naming conventions exist
  • Data dictionaries define key terms
  • Historical data has been cleaned and standardized

Red flags:

  • Same concepts have different names in different systems
  • No data dictionary exists
  • Categories have changed over time without documentation

Score: 0 (Inconsistent/unlabeled) | 1 (Partially standardized) | 2 (Well-labeled with documentation)


Question 4: Do you have enough historical data for AI use cases you're considering?

Why it matters: Some AI applications require substantial historical data for training. Insufficient data limits what's possible.

Green flags:

  • 2+ years of relevant historical data available
  • Data volume is sufficient for statistical significance
  • Data covers various scenarios and edge cases

Red flags:

  • Limited historical records
  • Data collection only started recently
  • Significant gaps in historical data

Score: 0 (Insufficient data) | 1 (Marginal data volume) | 2 (Robust historical data)


Question 5: Can you share data across departments when needed?

Why it matters: AI use cases often require combining data from multiple sources. Siloed data limits AI potential.

Green flags:

  • Data sharing agreements exist between departments
  • Technical integration between systems is possible
  • Privacy and security protocols enable appropriate sharing

Red flags:

  • Departments protect data territorially
  • No technical way to combine data from different systems
  • Legal or compliance concerns block data sharing

Score: 0 (Heavily siloed) | 1 (Some sharing possible) | 2 (Data flows appropriately across organization)


Section 2: Technical Infrastructure

Question 6: Is your current IT infrastructure cloud-enabled?

Why it matters: Most modern AI tools are cloud-based or cloud-optimized. Legacy on-premises infrastructure limits options.

Green flags:

  • Cloud infrastructure in place (AWS, Azure, GCP, or similar)
  • Hybrid capabilities exist for sensitive data
  • IT team has cloud deployment experience

Red flags:

  • Entirely on-premises with no cloud strategy
  • Security policies prohibit cloud usage
  • IT team lacks cloud experience

Score: 0 (No cloud capability) | 1 (Limited cloud adoption) | 2 (Cloud-ready infrastructure)


Question 7: Can your systems integrate with new tools via APIs?

Why it matters: AI tools need to connect with your existing systems. API capability enables integration.

Green flags:

  • Core systems have documented APIs
  • Integration platform or middleware exists
  • IT team has experience building integrations

Red flags:

  • Legacy systems with no API support
  • Custom integrations are expensive and slow
  • No integration experience in-house

Score: 0 (No API capability) | 1 (Limited integration options) | 2 (API-first architecture)


Question 8: Do you have adequate computing resources for AI workloads?

Why it matters: AI processing can be resource-intensive. Inadequate infrastructure causes performance issues.

Green flags:

  • Scalable compute resources available (cloud or on-premises)
  • Budget allocated for infrastructure upgrades
  • Performance monitoring in place

Red flags:

  • Current systems already at capacity
  • No budget for infrastructure investment
  • Unknown resource requirements for AI

Score: 0 (Inadequate resources) | 1 (Marginal capacity) | 2 (Scalable resources available)


Question 9: Is your cybersecurity posture ready for AI?

Why it matters: AI systems introduce new attack vectors. Security must evolve alongside AI adoption.

Green flags:

  • Security team involved in AI planning from the start
  • Security requirements defined for AI tools
  • Process exists for assessing AI vendor security

Red flags:

  • Security team unaware of AI plans
  • No security requirements for AI tools
  • No process for assessing AI vendor security

Score: 0 (Security not considered) | 1 (Basic security awareness) | 2 (Security-integrated AI planning)


Question 10: Do you have a process for testing and validating new technology?

Why it matters: AI tools need proper testing before production deployment. Ad hoc approaches increase risk.

Green flags:

  • Staging/testing environments available
  • Change management process exists
  • Rollback procedures documented

Red flags:

  • New tools deployed directly to production
  • No testing environment
  • No formal change management

Score: 0 (No testing process) | 1 (Informal testing) | 2 (Formal testing and validation process)


Section 3: Skills & Capabilities

Question 11: Does your leadership team understand AI fundamentals?

Why it matters: Leaders who understand AI make better strategic decisions and set realistic expectations.

Green flags:

  • Leaders can explain AI concepts in business terms
  • Realistic expectations about AI capabilities
  • Active engagement in AI strategy discussions

Red flags:

  • AI treated as magic or buzzword
  • Unrealistic expectations about timelines and outcomes
  • Leadership delegating all AI decisions to IT

Score: 0 (No AI literacy) | 1 (Basic awareness) | 2 (Solid AI understanding)


Question 12: Do you have technical talent who can work with AI systems?

Why it matters: Someone needs to implement, maintain, and troubleshoot AI systems—whether internal or external.

Green flags:

  • Data analysts or data scientists on staff
  • IT team has modern technology skills
  • Clear plan for building or acquiring AI talent

Red flags:

  • No data or analytics capabilities
  • IT team stretched thin with legacy maintenance
  • No plan for technical skill development

Score: 0 (No relevant technical skills) | 1 (Basic analytics capability) | 2 (AI-capable technical team)


Question 13: Are your employees open to adopting new technology?

Why it matters: User adoption determines whether AI delivers value. Resistance undermines even the best implementations.

Green flags:

  • History of successful technology adoption
  • Employees suggest process improvements
  • Culture values learning and adaptation

Red flags:

  • Previous technology rollouts faced resistance
  • "That's how we've always done it" culture
  • Fear of job displacement is prevalent

Score: 0 (High resistance expected) | 1 (Mixed adoption history) | 2 (Technology-embracing culture)


Question 14: Do you have capacity for AI training and change management?

Why it matters: AI adoption requires training time. Organizations at full capacity struggle to learn new tools.

Green flags:

  • Training budget and time allocated
  • Change management experience exists
  • Employees have bandwidth for learning

Red flags:

  • No training budget or time
  • Everyone is already overworked
  • No change management experience

Score: 0 (No capacity) | 1 (Limited capacity) | 2 (Training and change management resources available)


Question 15: Can you access external expertise when needed?

Why it matters: Few organizations have all required AI skills in-house. Access to external expertise accelerates progress.

Green flags:

  • Relationships with consultants or advisors
  • Budget for external expertise
  • Ability to evaluate and select vendors

Red flags:

  • No budget for external help
  • No network of potential advisors
  • Unable to evaluate external expertise quality

Score: 0 (No external access) | 1 (Limited external resources) | 2 (Strong external network and budget)


Section 4: Governance & Risk

Question 16: Do you have data governance policies in place?

Why it matters: AI amplifies the impact of data—both positive and negative. Governance ensures responsible use.

Green flags:

  • Data governance policies documented
  • Clear data ownership and stewardship
  • Regular policy review process

Red flags:

  • No formal data policies
  • Unclear data ownership
  • Policies outdated or not followed

Score: 0 (No data governance) | 1 (Basic policies exist) | 2 (Mature data governance program)


Question 17: Have you considered AI-specific risks?

Why it matters: AI introduces unique risks (bias, hallucination, security vulnerabilities) that traditional risk frameworks may not address.

Green flags:

  • AI risks discussed at leadership level
  • Risk assessment includes AI considerations
  • Mitigation strategies identified

Red flags:

  • AI risks not considered
  • "We'll figure it out later" mindset
  • No one accountable for AI risk

Score: 0 (Risks not considered) | 1 (Awareness without action) | 2 (AI risks actively managed)


Question 18: Are you compliant with relevant data protection regulations?

Why it matters: AI processing of personal data has regulatory implications. Non-compliance creates legal and financial risk.

Green flags:

  • PDPA compliance confirmed (or relevant local regulations)
  • Data protection processes documented
  • Legal/compliance team engaged

Red flags:

  • Unsure about compliance status
  • No legal review of data practices
  • Data protection not a priority

Score: 0 (Compliance unknown/uncertain) | 1 (Basic compliance) | 2 (Robust compliance program)


Question 19: Do you have a process for evaluating AI vendors?

Why it matters: AI vendor selection affects security, compliance, and outcomes. Structured evaluation reduces risk.

Green flags:

  • Vendor evaluation criteria defined
  • Security and compliance requirements documented
  • Due diligence process exists

Red flags:

  • No vendor evaluation process
  • Decisions based solely on features or price
  • No security requirements for vendors

Score: 0 (No evaluation process) | 1 (Informal evaluation) | 2 (Structured vendor assessment)


Question 20: Who will be accountable for AI initiatives?

Why it matters: Accountability ensures follow-through. AI initiatives without clear ownership drift or fail.

Green flags:

  • Executive sponsor identified
  • Clear roles and responsibilities defined
  • Decision-making authority established

Red flags:

  • No clear owner
  • Accountability is shared (meaning no one is accountable)
  • IT expected to drive without business engagement

Score: 0 (No accountability) | 1 (Unclear accountability) | 2 (Clear ownership and accountability)


Section 5: Strategic Alignment

Question 21: Is AI part of your business strategy?

Why it matters: AI without strategic purpose becomes a solution looking for a problem. Strategic alignment drives value.

Green flags:

  • AI explicitly mentioned in strategic plans
  • Business problems identified that AI could address
  • AI investment aligned with business priorities

Red flags:

  • AI pursued because competitors are doing it
  • No clear business case for AI
  • AI disconnected from strategy

Score: 0 (No strategic alignment) | 1 (Loose connection) | 2 (AI integrated into strategy)


Question 22: Have you identified specific use cases for AI?

Why it matters: General interest in AI doesn't produce results. Specific use cases enable focused implementation.

Green flags:

  • Concrete use cases documented
  • Use cases prioritized by value and feasibility
  • Business owners engaged in use case development

Red flags:

  • No specific use cases identified
  • "We want to use AI" without knowing where
  • IT-generated list without business input

Score: 0 (No use cases) | 1 (Vague use cases) | 2 (Prioritized, specific use cases)


Question 23: Do you have realistic expectations about AI timelines and outcomes?

Why it matters: Unrealistic expectations lead to disappointment and abandoned initiatives. Calibrated expectations enable success.

Green flags:

  • Understanding that AI requires iteration
  • Pilot-first mentality before scaling
  • Success metrics defined upfront

Red flags:

  • Expecting immediate transformation
  • No tolerance for learning curve
  • Success not defined

Score: 0 (Unrealistic expectations) | 1 (Somewhat calibrated) | 2 (Realistic, measured expectations)


Question 24: Is there budget allocated for AI initiatives?

Why it matters: AI requires investment. Organizations that expect free transformation are disappointed.

Green flags:

  • Specific AI budget allocated
  • Budget covers technology, training, and change management
  • Multi-year commitment, not one-time expense

Red flags:

  • No AI budget
  • Expecting to fund AI from existing budgets
  • One-time allocation without ongoing commitment

Score: 0 (No budget) | 1 (Inadequate budget) | 2 (Appropriate budget allocated)


Question 25: Do you have executive sponsorship for AI?

Why it matters: AI initiatives without executive support lose momentum when obstacles arise. Sponsorship ensures sustained commitment.

Green flags:

  • Named executive sponsor
  • Sponsor actively engaged, not just named
  • AI discussed regularly at leadership level

Red flags:

  • No executive sponsor
  • Sponsor in name only
  • AI delegated entirely to middle management

Score: 0 (No sponsorship) | 1 (Nominal sponsorship) | 2 (Active executive sponsorship)


Scoring Your Readiness

Calculate your total score across all 25 questions (maximum 50 points).

Score Range | Readiness Level | Recommendation
0-15        | Not Ready       | Focus on foundational gaps before AI investment
16-25       | Early Stage     | Address critical gaps; consider targeted pilots only
26-35       | Developing      | Ready for structured pilot programs
36-45       | Ready           | Prepared for broader AI adoption
46-50       | Advanced        | Strong foundation; focus on scaling and optimization
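The scoring logic above is simple enough to sketch in a few lines of code. This is a minimal illustration of the arithmetic, not a tool from any library; the function name and example scores are hypothetical.

```python
# Sketch of the scoring model: 25 questions, each scored 0-2,
# for a maximum of 50 points, mapped to the readiness bands in the table.

def readiness_level(total: int) -> str:
    """Map a total score (0-50) to a readiness band."""
    if not 0 <= total <= 50:
        raise ValueError("total must be between 0 and 50")
    if total <= 15:
        return "Not Ready"
    if total <= 25:
        return "Early Stage"
    if total <= 35:
        return "Developing"
    if total <= 45:
        return "Ready"
    return "Advanced"

# Hypothetical example: one score (0, 1, or 2) per question, grouped by section.
scores = {
    "Data Readiness": [2, 1, 1, 0, 2],
    "Technical Infrastructure": [1, 2, 1, 1, 2],
    "Skills & Capabilities": [1, 1, 2, 0, 1],
    "Governance & Risk": [2, 1, 1, 2, 1],
    "Strategic Alignment": [2, 2, 1, 1, 2],
}

total = sum(sum(section) for section in scores.values())
print(f"Total: {total}/50 -> {readiness_level(total)}")
# prints: Total: 33/50 -> Developing
```

Per-section subtotals (each out of 10) are worth printing as well, since the lowest-scoring section is what the self-assessment form below asks you to identify.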

AI Readiness Self-Assessment Form Template

ORGANIZATION: _______________________
DATE: _______________________
ASSESSED BY: _______________________

SECTION SCORES:
- Data Readiness (Q1-5): ___ / 10
- Technical Infrastructure (Q6-10): ___ / 10
- Skills & Capabilities (Q11-15): ___ / 10
- Governance & Risk (Q16-20): ___ / 10
- Strategic Alignment (Q21-25): ___ / 10

TOTAL SCORE: ___ / 50

LOWEST SCORING SECTION: _______________________

TOP 3 PRIORITY GAPS:
1. _______________________
2. _______________________
3. _______________________

RECOMMENDED NEXT STEPS:
_________________________________________________
_________________________________________________
_________________________________________________

FOLLOW-UP ASSESSMENT DATE: _______________________


Next Steps

If your assessment reveals gaps, you have two options:

  1. Self-directed improvement: Use the gap areas to guide your preparation efforts
  2. Guided assessment: Engage experts for a comprehensive AI readiness assessment

Either path is valid. The important thing is to understand your starting point before investing in AI.

Book an AI Readiness Audit with Pertama Partners for a comprehensive assessment with actionable recommendations.


Frequently Asked Questions

Does a low score mean we shouldn't pursue AI at all?

No. A low score means you should address foundational gaps before investing in AI. It's better to know this now than after a failed project.

Michael Lansdowne Hauge

Founder & Managing Partner at Pertama Partners. Founder of Pertama Group.


Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.

Book an AI Readiness Audit