
AI Training Needs Assessment: How to Identify Skill Gaps

November 17, 2025 · 10 min read · Michael Lansdowne Hauge

For: L&D Leaders, HR Leaders, AI Project Managers, Operations Directors

Learn how to identify AI skill gaps across your organisation with a structured needs assessment approach. Includes skills matrix template, assessment methods, and implementation checklist.


Key Takeaways

  1. Conduct a systematic AI training needs assessment
  2. Identify skill gaps across different roles and functions
  3. Prioritise training investments based on business impact
  4. Map current capabilities against AI readiness requirements
  5. Create targeted training plans based on assessment findings

AI Training Needs Assessment: How to Identify Skill Gaps

Your organisation is ready to embrace AI. You've heard the board's mandate, seen competitors moving, and your teams are asking questions. But before you book that "Introduction to AI" workshop for everyone, pause.

The most common AI training mistake? Delivering the same generic content to everyone, regardless of role, existing knowledge, or actual job requirements. The result: wasted budget, disengaged employees, and skills that don't translate to real work.

An AI training needs assessment changes this equation. It tells you exactly who needs what, at what level, and in what sequence—so your training investment actually moves the needle.


Executive Summary

  • AI training needs assessment identifies specific skill gaps across roles before investing in training programs
  • Generic AI training fails because skills requirements vary dramatically between executives, managers, and frontline staff
  • Three skill categories matter: Foundational (understanding), Applied (using tools), and Strategic (decision-making)
  • Assessment methods range from self-surveys to practical skill tests—use multiple approaches for accuracy
  • Role mapping is essential: Define what AI competency actually means for each function
  • Prioritise gaps by business impact, not by largest deficit—some gaps matter more than others
  • Assessment is not one-time: Build in regular reassessment as AI capabilities evolve
  • Output should be actionable: Role-specific training paths, not generic recommendations

Why This Matters Now

The AI skills gap is widening. The World Economic Forum's Future of Jobs Report 2023 found that 44% of workers' core skills will be disrupted in the next five years, with AI literacy at the centre of that shift. Yet most organisations are responding with broad-brush training that treats a CFO and a customer service representative as having identical learning needs.

This creates three problems:

Wasted resources. Generic "AI 101" courses consume budget without building job-relevant capability. Training that doesn't connect to daily work gets forgotten within weeks.

Frustrated employees. Executives forced through basic prompt engineering feel patronised. Frontline staff thrown into strategic AI discussions feel overwhelmed. Neither gets what they need.

Competitive disadvantage. While you're delivering one-size-fits-all training, competitors are building targeted capabilities that translate directly to productivity gains.

A proper needs assessment solves this by matching training to actual requirements—role by role, skill by skill.

If you're still designing your overall training strategy, see [our guide to designing an AI training program](/insights/designing-ai-training-program-framework-ld-leaders) for guidance on building an effective AI training program from the ground up.


Definitions and Scope

What Is an AI Training Needs Assessment?

An AI training needs assessment is a structured process to:

  • Identify what AI-related skills your workforce currently has
  • Define what AI skills each role actually needs
  • Map the gaps between current and required capabilities
  • Prioritise those gaps based on business impact
  • Translate findings into targeted training recommendations

It differs from a general AI readiness assessment (which evaluates data, infrastructure, and governance) by focusing specifically on human capabilities.

Skills vs. Knowledge vs. Mindset

A complete assessment examines three dimensions:

| Dimension | Definition | Example |
|---|---|---|
| Knowledge | Understanding concepts and terminology | Knowing what a large language model is and how it works |
| Skills | Ability to perform tasks | Writing effective prompts, evaluating AI outputs, configuring an AI tool |
| Mindset | Attitudes and approaches | Willingness to experiment, appropriate scepticism, ethical awareness |

Most assessments over-index on knowledge and under-assess skills and mindset. Knowledge without application is trivia.


The AI Skills Taxonomy

Before you can assess gaps, you need a framework for what "AI competency" means. We use a three-tier model:

Tier 1: Foundational AI Skills

Required by nearly everyone in an AI-enabled organisation:

  • Understanding what AI can and cannot do
  • Recognising AI outputs and their limitations
  • Basic AI ethics and responsible use principles
  • Knowing when to trust and when to verify AI outputs
  • Organisational AI policy awareness

For a foundational training curriculum, see [AI literacy training essentials](/insights/ai-literacy-training).

Tier 2: Applied AI Skills

Required by staff who use AI tools in their daily work:

  • Prompt engineering and effective AI interaction
  • Evaluating and improving AI outputs
  • Integrating AI tools into existing workflows
  • Data preparation and quality awareness
  • Tool-specific competencies for role-relevant applications

Tier 3: Strategic AI Skills

Required by leaders and specialists making AI decisions:

  • AI opportunity identification and use case prioritisation
  • AI project scoping and requirements definition
  • Vendor evaluation and selection
  • AI risk assessment and governance
  • AI-enabled process redesign
  • ROI measurement and business case development

For executive-specific training considerations, see [AI training for executives](/insights/ai-training-for-executives).


AI Skills Matrix Template

Use this matrix to define expected competencies by function. Adapt levels to your organisation.

| Role Category | Foundational | Applied | Strategic |
|---|---|---|---|
| Executive Leadership | Proficient | Awareness | Expert |
| Middle Management | Proficient | Proficient | Competent |
| Technical Specialists | Expert | Expert | Proficient |
| Business Analysts | Proficient | Expert | Competent |
| Frontline Staff | Competent | Competent | Awareness |
| Support Functions | Competent | Competent | Awareness |

Proficiency Levels:

  • Awareness: Understands the concept; cannot apply independently
  • Competent: Can apply with guidance or reference materials
  • Proficient: Can apply independently and troubleshoot issues
  • Expert: Can teach others and handle novel situations
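
As a sketch, the matrix and proficiency levels above can be encoded as data so that an individual's gaps fall out of a simple lookup. The role entries mirror two rows of the matrix; the sample respondent and the default of "Awareness" for unanswered tiers are illustrative assumptions.

```python
# Minimal sketch: encode the skills matrix as data and compute one
# person's gaps against their role's requirements. The sample input
# and the "Awareness" default for missing tiers are assumptions.

LEVELS = ["Awareness", "Competent", "Proficient", "Expert"]

REQUIRED = {
    "Executive Leadership": {"Foundational": "Proficient", "Applied": "Awareness", "Strategic": "Expert"},
    "Frontline Staff": {"Foundational": "Competent", "Applied": "Competent", "Strategic": "Awareness"},
}

def gaps(role, current):
    """Return each tier where the person sits below the role's
    requirement, with the number of proficiency levels to close."""
    result = {}
    for tier, required in REQUIRED[role].items():
        deficit = LEVELS.index(required) - LEVELS.index(current.get(tier, "Awareness"))
        if deficit > 0:
            result[tier] = deficit
    return result

print(gaps("Frontline Staff", {"Foundational": "Awareness", "Applied": "Competent"}))
# {'Foundational': 1}
```

The same structure extends naturally to the full matrix; storing requirements as data rather than prose also makes the semi-annual reassessment a diff rather than a rewrite.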

Step-by-Step Assessment Process

Step 1: Define Assessment Scope and Objectives

Start by clarifying what you're trying to achieve:

Scope questions:

  • Which departments or roles are in scope?
  • Are you assessing for current AI tools or future capabilities?
  • What's the timeline for assessment completion?
  • Who owns the assessment results?

Objective examples:

  • Identify training needs for upcoming AI tool deployment
  • Build baseline for measuring training effectiveness
  • Justify training budget with quantified gap data
  • Prioritise limited training resources

Document your scope and objectives before proceeding. This prevents scope creep and ensures stakeholder alignment.

Step 2: Map Roles to AI Impact Categories

Not all roles are equally affected by AI. Categorise your roles:

High AI Impact: Roles where AI will fundamentally change daily work

  • Examples: Customer service, content creation, data analysis, legal research

Medium AI Impact: Roles where AI will augment but not transform work

  • Examples: Project management, HR business partners, account management

Low AI Impact: Roles with limited AI interaction in the near term

  • Examples: Facilities management, manual trades (though this is changing)

This mapping helps you prioritise assessment effort and training investment.

Step 3: Select Assessment Methodology

Choose methods appropriate to your scale and objectives:

| Method | Best For | Pros | Cons |
|---|---|---|---|
| Self-Assessment Survey | Large-scale baseline | Fast, low cost, covers everyone | Self-perception bias |
| Manager Evaluation | Validating self-assessments | Adds external perspective | Manager may lack AI knowledge |
| Practical Skill Test | Verifying actual capability | Objective, accurate | Time-intensive to administer |
| Scenario-Based Assessment | Evaluating judgment | Tests applied thinking | Requires careful design |
| Focus Groups | Understanding context | Rich qualitative data | Small sample, hard to scale |

Recommendation: Use self-assessment for broad baseline, supplement with practical tests for high-impact roles.

Step 4: Develop Assessment Instruments

Create your assessment tools:

For self-assessment surveys:

  • Use behavioural indicators, not self-ratings of competence
  • Bad: "Rate your AI knowledge (1-5)"
  • Good: "I can identify three appropriate use cases for AI in my role: Yes/No/Unsure"

Sample self-assessment questions by tier:

Foundational:

  • I can explain what generative AI is to a colleague who has never used it
  • I understand our organisation's AI acceptable use policy
  • I can identify when AI output might be inaccurate or biased

Applied:

  • I use AI tools at least weekly in my work
  • I can write prompts that consistently produce useful outputs
  • I verify AI outputs before using them in my work

Strategic:

  • I can identify processes in my area that could benefit from AI
  • I can articulate the risks of an AI implementation in my domain
  • I have contributed to an AI business case or project plan
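
One way to score Yes/No/Unsure behavioural indicators like those above is a simple average per tier. The weights below are an assumption to calibrate against your practical-test results, not a validated psychometric scale.

```python
# Illustrative scoring for behavioural-indicator responses.
# The Yes/No/Unsure weights are assumed, not a validated scale.

SCORE = {"Yes": 1.0, "Unsure": 0.5, "No": 0.0}

def tier_score(responses):
    """Average indicator score (0.0-1.0) for one tier of questions."""
    return sum(SCORE[r] for r in responses) / len(responses)

# Three foundational-tier answers from one respondent:
print(tier_score(["Yes", "Unsure", "No"]))  # 0.5
```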

Step 5: Conduct Baseline Assessment

Execute your assessment:

Preparation:

  • Communicate purpose clearly (improvement, not evaluation)
  • Provide completion timeline
  • Ensure anonymity where appropriate
  • Brief managers on their role

Administration:

  • Allow sufficient time (surveys: 15-20 minutes max)
  • Provide support for questions
  • Track completion rates by department

For practical tests:

  • Standardise conditions
  • Use realistic scenarios relevant to actual work
  • Have clear scoring criteria defined in advance

Step 6: Analyse Gaps and Patterns

With data collected, analysis begins:

Individual gap analysis:

  • Current level vs. required level for each skill area
  • Priority gaps (high-impact roles with large deficits)

Pattern identification:

  • Common gaps across departments (indicates systemic training need)
  • Variation within roles (indicates inconsistent past training)
  • Outliers (both high performers to leverage and struggling individuals to support)

Segmentation:

  • Group employees by gap patterns, not just roles
  • "AI enthusiasts needing structure" vs. "AI sceptics needing foundation"
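
The pattern checks in Step 6 reduce to two statistics per group: a high mean gap across a department suggests a systemic training need, while a high spread within a role suggests inconsistent past training. A sketch with made-up gap data:

```python
from statistics import mean, pstdev

# Hypothetical per-person gap sizes (proficiency levels to close) by department.
gaps_by_dept = {
    "Customer Service": [2, 2, 1, 2],  # high mean, low spread: systemic need
    "Finance": [0, 3, 1, 2],           # similar mean, high spread: inconsistent training
}

for dept, g in gaps_by_dept.items():
    print(f"{dept}: mean gap {mean(g):.2f}, spread {pstdev(g):.2f}")
```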

Step 7: Prioritise Based on Business Impact

Not all gaps are equal. Prioritise using:

Impact-Effort Matrix:

| | Low Effort to Close | High Effort to Close |
|---|---|---|
| High Business Impact | Priority 1 (Do first) | Priority 2 (Plan carefully) |
| Low Business Impact | Priority 3 (Quick wins) | Priority 4 (Deprioritise) |

Business impact factors:

  • Role criticality to AI initiatives
  • Volume of people in similar roles
  • Revenue/cost implications of the skill gap
  • Risk implications of the skill gap
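
Once each gap has rough impact and effort scores, the matrix can be applied mechanically. In this sketch the 0-1 scale and the 0.5 cut-off are arbitrary assumptions you would calibrate locally.

```python
# Sketch of quadrant assignment for the impact-effort matrix.
# Scores are assumed normalised to 0-1; the 0.5 threshold is arbitrary.

def priority(impact, effort, threshold=0.5):
    """Map a gap's impact/effort scores to one of the four quadrants."""
    if impact >= threshold:
        return "Priority 1 (Do first)" if effort < threshold else "Priority 2 (Plan carefully)"
    return "Priority 3 (Quick wins)" if effort < threshold else "Priority 4 (Deprioritise)"

print(priority(impact=0.9, effort=0.2))  # Priority 1 (Do first)
print(priority(impact=0.3, effort=0.8))  # Priority 4 (Deprioritise)
```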

Step 8: Create Role-Specific Training Paths

Translate findings into actionable plans:

Training path components:

  • Target audience (specific roles/individuals)
  • Learning objectives (skills to be gained)
  • Delivery method (instructor-led, e-learning, coaching, on-the-job)
  • Sequence and prerequisites
  • Duration and time commitment
  • Success measures

Example training path structure:

Path A: Foundational AI Literacy (All Staff)

  1. AI Basics e-learning (2 hours)
  2. Company AI Policy workshop (1 hour)
  3. AI Ethics scenarios (1 hour)

Path B: Applied AI User (Customer Service)

  1. Complete Path A
  2. AI Tool Introduction—hands-on lab (4 hours)
  3. Prompt Engineering for Customer Service (3 hours)
  4. Supervised practice period (2 weeks)
  5. Competency verification
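
The example paths can also be represented as data with explicit prerequisites, which makes the time commitment easy to report per audience. The module names and durations below come from the example above; the encoding itself is just one possible sketch, and the two-week supervised practice is treated as elapsed time rather than contact hours.

```python
# Sketch: the example training paths as data. Durations are the contact
# hours from the text; the supervised practice period is deliberately
# excluded because it is elapsed time, not contact time.

PATHS = {
    "A": [("AI Basics e-learning", 2), ("Company AI Policy workshop", 1), ("AI Ethics scenarios", 1)],
    "B": [("AI Tool Introduction lab", 4), ("Prompt Engineering for Customer Service", 3)],
}
PREREQUISITES = {"B": ["A"]}  # Path B requires completing Path A first

def total_hours(path):
    """Contact hours for a path, including its prerequisite paths."""
    own = sum(hours for _, hours in PATHS[path])
    return own + sum(total_hours(p) for p in PREREQUISITES.get(path, []))

print(total_hours("B"))  # 11 (4 hours of Path A + 7 hours of Path B)
```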

Common Failure Modes

1. Assessing Generic "AI Knowledge" vs. Role-Specific Skills

Testing whether someone knows what GPT stands for doesn't tell you if they can use AI effectively in their job. Assess applied capability, not trivia.

2. Skipping Business Impact Prioritisation

Addressing the largest gaps first sounds logical but isn't. A small gap in a high-impact role matters more than a large gap in a peripheral function.

3. Using One Assessment for All Roles

An executive and an analyst need different assessments. A single survey can't adequately assess both strategic thinking and technical tool skills.

4. Confusing Enthusiasm with Competence

The employee most excited about AI isn't necessarily the most skilled. And the sceptic may already be using AI effectively. Assess actual capability, not attitude alone.

5. Not Involving Managers in Assessment Design

Managers know what skills their teams actually need. HR-only design produces assessments disconnected from real work requirements.

6. Waiting for Perfect Data Before Acting

Some gaps are obvious. Don't delay addressing clear needs while perfecting your assessment methodology.

7. Treating Assessment as One-Time

AI capabilities evolve monthly. Your skills framework and assessment need regular updates—at minimum annually, preferably semi-annually.


Implementation Checklist

Pre-Assessment

  • Define assessment scope (departments, roles, timeline)
  • Document assessment objectives
  • Map roles to AI impact categories
  • Build or adopt AI skills taxonomy
  • Define expected competency levels by role
  • Select assessment methods
  • Develop assessment instruments
  • Pilot with small group
  • Brief managers on assessment purpose and their role

During Assessment

  • Communicate purpose to all participants
  • Provide clear instructions and timeline
  • Monitor completion rates
  • Provide support for questions
  • Administer practical tests where planned

Post-Assessment

  • Analyse individual and aggregate gaps
  • Identify patterns across roles and departments
  • Prioritise gaps by business impact
  • Create role-specific training recommendations
  • Validate recommendations with business leaders
  • Develop training paths and timeline
  • Set baseline for measuring training effectiveness
  • Schedule reassessment (6-12 months)

Metrics to Track

Assessment Quality Metrics

| Metric | Target | Why It Matters |
|---|---|---|
| Assessment completion rate | >85% | Incomplete data = incomplete picture |
| Self vs. practical test correlation | >0.6 | Validates self-assessment accuracy |
| Manager review completion | >90% | Ensures external validation |

Gap Analysis Metrics

| Metric | Target | Why It Matters |
|---|---|---|
| % of roles with defined competency requirements | 100% | Can't assess gaps without requirements |
| Average gap size by tier | Track over time | Measures progress |
| Gap distribution by department | Even or justified | Identifies systemic issues |

Outcome Metrics

| Metric | Target | Why It Matters |
|---|---|---|
| Training recommendation acceptance | >80% | Indicates actionable findings |
| Gap closure rate at reassessment | >50% of priority gaps | Validates training effectiveness |
| Time-to-competency by role | Benchmark, then improve | Efficiency measure |

To understand how to measure training effectiveness after deployment, see [measuring AI training ROI](/insights/measuring-ai-training-roi).


Tooling Suggestions

Survey and Assessment Platforms

  • General survey tools (Microsoft Forms, Google Forms, Typeform) for basic self-assessments
  • LMS platforms with assessment features for integrated tracking
  • Dedicated skills assessment platforms for sophisticated analysis

Skills Management

  • Skills inventory platforms that can track AI competencies alongside other capabilities
  • Learning experience platforms (LXP) that recommend training based on gaps
  • Competency management modules within HRIS systems

Analysis

  • Spreadsheet tools for smaller organisations
  • Business intelligence platforms for larger-scale analysis
  • HR analytics tools for workforce planning integration

Practical Assessment

  • Sandbox AI environments for skills testing
  • Screen recording tools for evaluating AI tool usage
  • Rubric-based scoring templates



Taking Action

An AI training needs assessment is the foundation for effective AI capability building. Without it, you're guessing—and guessing with training budgets rarely ends well.

The organisations seeing real returns on AI training investment are those who know exactly what skills they need, where the gaps are, and how to prioritise limited resources. Assessment provides that clarity.

Ready to assess your organisation's AI training needs systematically?

Pertama Partners helps organisations design and conduct AI training needs assessments that translate directly into effective capability building. Our AI Readiness Audit includes a comprehensive skills assessment component tailored to your roles and objectives.

Book an AI Readiness Audit →


References

  1. World Economic Forum. (2023). Future of Jobs Report 2023.
  2. LinkedIn Learning. (2024). Workplace Learning Report.
  3. McKinsey Global Institute. (2024). The State of AI in 2024.
  4. SHRM. (2023). Skills-Based Hiring and Development Guide.
  5. Gartner. (2024). Building AI Skills in the Enterprise.

Frequently Asked Questions

How long does an AI training needs assessment take?

For a mid-sized organisation (200-500 employees), expect 4-6 weeks from scoping to recommendations. Larger organisations may need 8-12 weeks. Don't rush—but don't let perfect be the enemy of good.

Michael Lansdowne Hauge

Founder & Managing Partner

Founder & Managing Partner at Pertama Partners. Founder of Pertama Group.

Tags: ai training, skills assessment, capability gap, workforce development, hr strategy, learning and development, change management, ai training needs assessment template, identifying ai skill gaps, workforce ai capability audit, training needs analysis process, assessing organizational ai readiness, AI skills gap assessment, training needs analysis framework, workforce capability mapping

Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.

Book an AI Readiness Audit