
AI Training Needs Assessment: How to Identify Skill Gaps

November 17, 2025 · 10 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: CHRO · Consultant · CTO/CIO · CFO · CEO/Founder · Head of Operations · IT Manager

Learn how to identify AI skill gaps across your organisation with a structured needs assessment approach. Includes skills matrix template, assessment methods, and implementation checklist.


Key Takeaways

  1. Conduct a systematic AI training needs assessment
  2. Identify skill gaps across different roles and functions
  3. Prioritise training investments based on business impact
  4. Map current capabilities against AI readiness requirements
  5. Create targeted training plans based on assessment findings


Your organisation is ready to embrace AI. You've heard the board's mandate, seen competitors moving, and your teams are asking questions. But before you book that "Introduction to AI" workshop for everyone, pause.

The most common AI training mistake? Delivering the same generic content to everyone, regardless of role, existing knowledge, or actual job requirements. The result: wasted budget, disengaged employees, and skills that don't translate to real work.

An AI training needs assessment changes this equation. It tells you exactly who needs what, at what level, and in what sequence, so your training investment actually moves the needle.


Executive Summary

An AI training needs assessment identifies specific skill gaps across roles before an organisation commits capital to training programs. The rationale is straightforward: generic AI training fails because skills requirements vary dramatically between executives, managers, and frontline staff. A CFO evaluating an AI vendor proposal and a customer service agent drafting responses with a chatbot occupy entirely different competency domains, yet too many programmes treat them as interchangeable learners.

Effective assessment operates across three skill categories: Foundational (understanding what AI is and is not), Applied (using AI tools productively in daily work), and Strategic (making sound decisions about AI investments and governance). Assessment methods range from self-reported surveys to practical skill tests, and the most reliable programmes combine multiple approaches to correct for self-perception bias. Central to the entire process is role mapping, the disciplined work of defining what AI competency actually means for each function in your organisation.

Once gaps are identified, they must be prioritised by business impact, not simply by the size of the deficit. A modest skills gap in a revenue-critical function will almost always warrant attention before a larger gap in a peripheral role. Crucially, assessment is not a one-time exercise. AI capabilities evolve rapidly, and your skills framework must evolve with them through regular reassessment. The ultimate output should be actionable, role-specific training paths, not generic recommendations that gather dust.


Why This Matters Now

The AI skills gap is widening, and the pace of change is accelerating. The World Economic Forum's Future of Jobs Report found that 44% of workers' core skills will change in the next five years, with AI literacy sitting at the centre of that shift. Yet most organisations are responding with broad-brush training that treats a CFO and a customer service representative as having identical learning needs.

This mismatch creates compounding problems. The first is wasted resources. Generic "AI 101" courses consume budget without building job-relevant capability, and training that fails to connect to daily work gets forgotten within weeks. The second is frustrated employees. Executives forced through basic prompt engineering exercises feel patronised, while frontline staff thrown into strategic AI discussions feel overwhelmed. Neither group receives what they actually need. The third, and most consequential, is competitive disadvantage. While your organisation delivers one-size-fits-all training, competitors are building targeted capabilities that translate directly into measurable productivity gains.

A proper needs assessment solves this by matching training to actual requirements, role by role, skill by skill.

If you're still designing your overall training strategy, see our guidance on building an effective AI training program from the ground up.


Definitions and Scope

What Is an AI Training Needs Assessment?

An AI training needs assessment is a structured process for understanding the distance between your workforce's current AI capabilities and the capabilities each role actually requires. It begins with identifying what AI-related skills your workforce currently possesses, then defines what AI skills each role genuinely needs. The core analytical work lies in mapping the gaps between current and required capabilities, then prioritising those gaps based on their business impact. The process concludes by translating findings into targeted training recommendations that connect directly to job performance.

It differs from a general AI readiness assessment (which evaluates data infrastructure, governance frameworks, and technology platforms) by focusing specifically on human capabilities.

Skills vs. Knowledge vs. Mindset

A complete assessment examines three dimensions. Knowledge refers to understanding concepts and terminology; for example, knowing what a large language model is and how it works at a functional level. Skills refer to the ability to perform tasks, such as writing effective prompts, evaluating AI outputs for accuracy, or configuring an AI tool for a specific workflow. Mindset encompasses the attitudes and approaches that determine whether knowledge and skills are actually applied: willingness to experiment, appropriate scepticism about AI outputs, and ethical awareness.

Most assessments over-index on knowledge and under-assess skills and mindset. Knowledge without application is trivia.


The AI Skills Taxonomy

Before you can assess gaps, you need a framework for what "AI competency" means. We use a three-tier model that maps to progressively deeper levels of organisational responsibility.

Tier 1: Foundational AI Skills

Foundational skills are required by nearly everyone in an AI-enabled organisation. These include understanding what AI can and cannot do, recognising AI outputs and their inherent limitations, and grasping basic AI ethics and responsible use principles. Employees at this tier should know when to trust AI outputs and when to verify them independently, and they should demonstrate awareness of the organisation's AI policies and acceptable use guidelines.

For a foundational training curriculum, see our article on AI literacy training essentials.

Tier 2: Applied AI Skills

Applied skills are required by staff who use AI tools in their daily work. This tier encompasses prompt engineering and effective AI interaction, the ability to evaluate and improve AI outputs, and the practical know-how to integrate AI tools into existing workflows without disrupting them. Applied users also need data preparation and quality awareness, because AI outputs are only as reliable as the inputs that produce them. Finally, this tier includes tool-specific competencies for the particular applications relevant to each role.

Tier 3: Strategic AI Skills

Strategic skills are required by leaders and specialists who make consequential AI decisions on behalf of the organisation. This tier covers AI opportunity identification and use case prioritisation, AI project scoping and requirements definition, and the ability to evaluate and select vendors. It also includes AI risk assessment and governance, AI-enabled process redesign, and the financial discipline of ROI measurement and business case development.

For executive-specific training considerations, see our guidance on executive AI training.


AI Skills Matrix Template

Use this matrix to define expected competencies by function. Adapt levels to your organisation.

| Role Category | Foundational | Applied | Strategic |
| --- | --- | --- | --- |
| Executive Leadership | Proficient | Awareness | Expert |
| Middle Management | Proficient | Proficient | Competent |
| Technical Specialists | Expert | Expert | Proficient |
| Business Analysts | Proficient | Expert | Competent |
| Frontline Staff | Competent | Competent | Awareness |
| Support Functions | Competent | Competent | Awareness |

The proficiency levels follow a clear progression. Awareness means the individual understands the concept but cannot apply it independently. Competent indicates the ability to apply the skill with guidance or reference materials. Proficient describes someone who can apply the skill independently and troubleshoot issues as they arise. Expert denotes the ability to teach others and handle novel, unstructured situations.
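
For teams that prefer to work with the matrix programmatically, the sketch below (Python, purely illustrative) encodes the role requirements and ordinal levels so that the gap calculations later in the process can reference a single source of truth. Role names and level numbers mirror the matrix above; adapt both to your organisation.

```python
# Minimal sketch of the skills matrix as data, assuming an ordinal scale
# where higher numbers mean deeper capability. Values mirror the matrix above.

LEVELS = {"Awareness": 1, "Competent": 2, "Proficient": 3, "Expert": 4}

REQUIRED_LEVELS = {
    # role: (Foundational, Applied, Strategic)
    "Executive Leadership":  ("Proficient", "Awareness", "Expert"),
    "Middle Management":     ("Proficient", "Proficient", "Competent"),
    "Technical Specialists": ("Expert", "Expert", "Proficient"),
    "Business Analysts":     ("Proficient", "Expert", "Competent"),
    "Frontline Staff":       ("Competent", "Competent", "Awareness"),
    "Support Functions":     ("Competent", "Competent", "Awareness"),
}

def required_score(role: str, tier_index: int) -> int:
    """Numeric target level for a role and tier (0=Foundational, 1=Applied, 2=Strategic)."""
    return LEVELS[REQUIRED_LEVELS[role][tier_index]]

print(required_score("Business Analysts", 1))  # 4 -> Expert at the Applied tier
```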


Step-by-Step Assessment Process

Step 1: Define Assessment Scope and Objectives

Start by clarifying what you are trying to achieve and where the boundaries lie. On scope, determine which departments or roles fall within the assessment, whether you are assessing for current AI tools or future capabilities the organisation plans to adopt, what timeline governs assessment completion, and who ultimately owns the assessment results and is accountable for acting on them.

On objectives, be specific about intended outcomes. You might be identifying training needs for an upcoming AI tool deployment, building a baseline against which to measure future training effectiveness, justifying a training budget with quantified gap data, or prioritising limited training resources across competing demands.

Document your scope and objectives before proceeding. This prevents scope creep and ensures stakeholder alignment from the outset.

Step 2: Map Roles to AI Impact Categories

Not all roles are equally affected by AI, and your assessment effort should reflect that reality. High AI Impact roles are those where AI will fundamentally change daily work; customer service, content creation, data analysis, and legal research fall squarely into this category. Medium AI Impact roles are those where AI will augment but not transform work, such as project management, HR business partners, and account management. Low AI Impact roles involve limited AI interaction in the near term, including facilities management and manual trades (though this boundary is shifting faster than most organisations expect).

This mapping serves a dual purpose: it helps you prioritise assessment effort and directs training investment toward the roles where returns will be greatest.
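
If you maintain this mapping in a script or spreadsheet export, the illustrative Python sketch below shows one way to tie impact categories to assessment depth. The role names and method combinations are assumptions for illustration, not a prescribed list.

```python
# Hedged illustration: map roles to AI impact categories and use the category
# to decide how much assessment effort each role receives.

AI_IMPACT = {
    "Customer Service":      "High",
    "Content Creation":      "High",
    "Data Analysis":         "High",
    "Project Management":    "Medium",
    "HR Business Partner":   "Medium",
    "Facilities Management": "Low",
}

ASSESSMENT_DEPTH = {
    "High":   ["self-assessment", "manager review", "practical test"],
    "Medium": ["self-assessment", "manager review"],
    "Low":    ["self-assessment"],
}

for role, impact in AI_IMPACT.items():
    print(f"{role}: {impact} impact -> {', '.join(ASSESSMENT_DEPTH[impact])}")
```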

Step 3: Select Assessment Methodology

Choose methods appropriate to your scale and objectives. Self-assessment surveys work best for establishing a large-scale baseline. They are fast, low cost, and can cover the entire organisation, though they are subject to self-perception bias. Manager evaluations add an external perspective that helps validate self-assessments, though managers themselves may lack sufficient AI knowledge to evaluate accurately. Practical skill tests verify actual capability with objectivity and accuracy, but they are time-intensive to administer. Scenario-based assessments test applied thinking and judgment in realistic contexts, though they require careful design to be meaningful. Focus groups generate rich qualitative data about context and barriers, but they work with small samples and are difficult to scale.

The most reliable approach uses self-assessment for a broad baseline, supplemented with practical tests for high-impact roles where accuracy matters most.

Step 4: Develop Assessment Instruments

The quality of your assessment depends entirely on the quality of your instruments. For self-assessment surveys, use behavioural indicators rather than abstract self-ratings of competence. A question like "Rate your AI knowledge on a scale of 1 to 5" tells you very little. A question like "I can identify three appropriate use cases for AI in my role: Yes/No/Unsure" reveals actual capability.

Sample self-assessment questions should span all three tiers. At the Foundational level, ask whether the individual can explain what generative AI is to a colleague who has never used it, whether they understand the organisation's AI acceptable use policy, and whether they can identify when AI output might be inaccurate or biased. At the Applied level, probe whether they use AI tools at least weekly in their work, whether they can write prompts that consistently produce useful outputs, and whether they verify AI outputs before incorporating them into deliverables. At the Strategic level, assess whether they can identify processes in their area that could benefit from AI, whether they can articulate the risks of an AI implementation in their domain, and whether they have contributed to an AI business case or project plan.
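
Scoring behavioural indicators is straightforward to automate. The sketch below assumes Yes/Unsure/No responses mapped to 1.0/0.5/0.0 and averaged within each tier; the question identifiers are hypothetical placeholders, not a fixed question bank.

```python
# Minimal scoring sketch for behavioural-indicator survey questions.

RESPONSE_SCORE = {"Yes": 1.0, "Unsure": 0.5, "No": 0.0}

# Each question belongs to one tier of the skills taxonomy (illustrative IDs).
QUESTION_TIER = {
    "explain_genai":            "Foundational",
    "knows_ai_policy":          "Foundational",
    "uses_ai_weekly":           "Applied",
    "writes_effective_prompts": "Applied",
    "identifies_use_cases":     "Strategic",
}

def tier_scores(responses: dict[str, str]) -> dict[str, float]:
    """Average the response scores within each tier (0.0-1.0)."""
    totals: dict[str, list[float]] = {}
    for question, answer in responses.items():
        tier = QUESTION_TIER[question]
        totals.setdefault(tier, []).append(RESPONSE_SCORE[answer])
    return {tier: sum(scores) / len(scores) for tier, scores in totals.items()}

print(tier_scores({
    "explain_genai": "Yes",
    "knows_ai_policy": "Unsure",
    "uses_ai_weekly": "No",
    "writes_effective_prompts": "Unsure",
    "identifies_use_cases": "No",
}))
# {'Foundational': 0.75, 'Applied': 0.25, 'Strategic': 0.0}
```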

Step 5: Conduct Baseline Assessment

Execution requires careful preparation. Communicate the purpose clearly to all participants, emphasising that the assessment is designed for improvement, not performance evaluation. Provide a completion timeline, ensure anonymity where appropriate, and brief managers on their specific role in the process.

During administration, allow sufficient time for completion (surveys should take no more than 15 to 20 minutes). Provide support for questions that arise, and track completion rates by department to identify pockets of non-participation early. For practical tests, standardise conditions across all participants, use realistic scenarios drawn from actual work contexts, and define clear scoring criteria in advance so that evaluation is consistent and defensible.

Step 6: Analyse Gaps and Patterns

With data collected, the analytical work begins on three levels. Individual gap analysis compares each person's current level against the required level for their role across each skill area, flagging priority gaps where high-impact roles show large deficits. Pattern identification looks across the organisation for common gaps that appear in multiple departments (indicating a systemic training need), variation within the same role type (indicating inconsistent past training), and outliers at both ends of the spectrum, from high performers who can be leveraged as internal champions to struggling individuals who need targeted support.

Segmentation is where the analysis becomes truly actionable. Group employees by gap patterns rather than organisational chart alone. The distinction between "AI enthusiasts who need structure and governance awareness" and "AI sceptics who need foundational confidence-building" matters far more for training design than departmental labels.
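
A minimal sketch of the gap calculation and segmentation logic follows, assuming current and required levels share the 1-to-4 ordinal scale from the skills matrix. The segment labels and thresholds are illustrative, not prescriptive.

```python
# Individual gap analysis plus simple gap-pattern segmentation (illustrative).

def gaps(current: dict[str, int], required: dict[str, int]) -> dict[str, int]:
    """Positive values are deficits; zero or negative means the requirement is met."""
    return {tier: required[tier] - current[tier] for tier in required}

def segment(current: dict[str, int], uses_ai_weekly: bool) -> str:
    """Group people by gap pattern rather than org chart (labels are examples)."""
    if uses_ai_weekly and current["Strategic"] <= 1:
        return "Enthusiasts needing structure and governance awareness"
    if not uses_ai_weekly and current["Foundational"] <= 1:
        return "Sceptics needing foundational confidence-building"
    return "On track"

person   = {"Foundational": 2, "Applied": 3, "Strategic": 1}
required = {"Foundational": 3, "Applied": 4, "Strategic": 2}
print(gaps(person, required))                 # {'Foundational': 1, 'Applied': 1, 'Strategic': 1}
print(segment(person, uses_ai_weekly=True))   # Enthusiast needing governance awareness
```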

Step 7: Prioritise Based on Business Impact

Not all gaps are equal, and addressing the largest gaps first sounds logical but is not always correct. Use an Impact-Effort Matrix to guide prioritisation. Gaps that sit at the intersection of high business impact and low effort to close are your first priority. Gaps with high business impact but high effort to close warrant careful planning as a second priority. Low-impact, low-effort gaps represent quick wins worth capturing as a third priority. Low-impact, high-effort gaps should be deprioritised or deferred entirely.

The business impact assessment itself should weigh several factors: how critical the role is to current AI initiatives, the volume of people occupying similar roles (which determines the scale of training investment), the revenue or cost implications of leaving the gap unaddressed, and the risk implications of skill deficiencies in that area.
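
The quadrant logic is simple enough to encode directly. The sketch below assumes each gap has already been rated High or Low on both impact and effort; the ratings and gap names are invented examples.

```python
# Impact-Effort prioritisation sketch: lower number = address sooner.

def priority(impact: str, effort: str) -> int:
    """1 = do first, 4 = defer. Quadrants follow the matrix described above."""
    quadrants = {
        ("High", "Low"):  1,  # high impact, easy to close
        ("High", "High"): 2,  # high impact, plan carefully
        ("Low", "Low"):   3,  # quick wins
        ("Low", "High"):  4,  # deprioritise or defer
    }
    return quadrants[(impact, effort)]

gaps_to_close = [
    {"gap": "Prompt skills, customer service", "impact": "High", "effort": "Low"},
    {"gap": "Vendor evaluation, finance",      "impact": "High", "effort": "High"},
    {"gap": "AI policy awareness, facilities", "impact": "Low",  "effort": "Low"},
]

for g in sorted(gaps_to_close, key=lambda g: priority(g["impact"], g["effort"])):
    print(priority(g["impact"], g["effort"]), g["gap"])
```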

Step 8: Create Role-Specific Training Paths

The final step translates findings into actionable plans. Each training path should specify the target audience (by role or individual), the learning objectives expressed as skills to be gained, the delivery method (whether instructor-led, e-learning, coaching, or on-the-job practice), the sequence and prerequisites, the duration and time commitment expected, and the success measures that will indicate whether the training achieved its purpose.

A practical example illustrates the structure. Path A, designed for Foundational AI Literacy across all staff, might comprise an AI Basics e-learning module (2 hours), followed by a Company AI Policy workshop (1 hour), and concluded with an AI Ethics scenario exercise (1 hour). Path B, designed for Applied AI Users in Customer Service, would begin with completion of Path A as a prerequisite, then progress through an AI Tool Introduction hands-on lab (4 hours), a Prompt Engineering for Customer Service workshop (3 hours), a supervised practice period of two weeks, and a formal competency verification.
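
For organisations that maintain training paths in configuration rather than slide decks, the sketch below mirrors the Path A / Path B structure with an illustrative data class; module names and durations are taken from the example above, and the field names are assumptions.

```python
# Illustrative data structure for role-specific training paths.
from dataclasses import dataclass, field

@dataclass
class TrainingPath:
    name: str
    audience: str
    prerequisites: list[str] = field(default_factory=list)
    modules: list[tuple[str, str]] = field(default_factory=list)  # (module, duration)
    success_measure: str = ""

path_a = TrainingPath(
    name="Path A: Foundational AI Literacy",
    audience="All staff",
    modules=[("AI Basics e-learning", "2 hours"),
             ("Company AI Policy workshop", "1 hour"),
             ("AI Ethics scenario exercise", "1 hour")],
    success_measure="Pass foundational knowledge check",
)

path_b = TrainingPath(
    name="Path B: Applied AI for Customer Service",
    audience="Customer service agents",
    prerequisites=[path_a.name],
    modules=[("AI Tool Introduction hands-on lab", "4 hours"),
             ("Prompt Engineering for Customer Service", "3 hours"),
             ("Supervised practice period", "2 weeks")],
    success_measure="Formal competency verification",
)

print(path_b.prerequisites)  # ['Path A: Foundational AI Literacy']
```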


Common Failure Modes

1. Assessing Generic "AI Knowledge" vs. Role-Specific Skills

Testing whether someone knows what GPT stands for tells you nothing about whether they can use AI effectively in their job. The distinction between knowledge recall and applied capability is critical. Assess what people can do, not what they can define.

2. Skipping Business Impact Prioritisation

Addressing the largest gaps first sounds logical but often misdirects resources. A small gap in a high-impact, revenue-generating role matters considerably more than a large gap in a peripheral function. Without impact prioritisation, training budgets flow toward the loudest deficits rather than the most consequential ones.

3. Using One Assessment for All Roles

An executive and an analyst operate in fundamentally different competency domains. A single survey instrument cannot adequately assess both strategic thinking and technical tool proficiency. Assessment instruments must be tailored to the tier of competency being evaluated.

4. Confusing Enthusiasm with Competence

The employee most excited about AI is not necessarily the most skilled. Conversely, the sceptic may already be using AI tools effectively and quietly. Assess actual capability, not attitude alone. Enthusiasm is valuable, but it is not a proxy for proficiency.

5. Not Involving Managers in Assessment Design

Managers possess direct knowledge of what skills their teams actually need day to day. When HR designs assessments in isolation, the result is often an instrument disconnected from real work requirements. Manager input grounds the assessment in operational reality.

6. Waiting for Perfect Data Before Acting

Some gaps are obvious from the earliest data collection. Do not delay addressing clear, well-understood needs while perfecting your assessment methodology. The pursuit of analytical completeness can itself become a barrier to action.

7. Treating Assessment as One-Time

AI capabilities evolve monthly. The skills framework that was accurate six months ago may already contain gaps and obsolete requirements. Build in regular updates to your assessment, at minimum annually and preferably semi-annually, to keep pace with the technology your workforce is expected to use.


Implementation Checklist

Pre-Assessment

  • Define assessment scope (departments, roles, timeline)
  • Document assessment objectives
  • Map roles to AI impact categories
  • Build or adopt AI skills taxonomy
  • Define expected competency levels by role
  • Select assessment methods
  • Develop assessment instruments
  • Pilot with small group
  • Brief managers on assessment purpose and their role

During Assessment

  • Communicate purpose to all participants
  • Provide clear instructions and timeline
  • Monitor completion rates
  • Provide support for questions
  • Administer practical tests where planned

Post-Assessment

  • Analyse individual and aggregate gaps
  • Identify patterns across roles and departments
  • Prioritise gaps by business impact
  • Create role-specific training recommendations
  • Validate recommendations with business leaders
  • Develop training paths and timeline
  • Set baseline for measuring training effectiveness
  • Schedule reassessment (6-12 months)

Metrics to Track

Assessment Quality Metrics

Three metrics determine whether your assessment itself is sound. Assessment completion rate should target above 85%; incomplete data produces an incomplete picture of organisational capability. Self-assessment versus practical test correlation should exceed 0.6, which validates that employees' self-perceptions align reasonably with demonstrated ability. Manager review completion should exceed 90% to ensure that external validation complements self-reported data.
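
Both the completion rate and the self-assessment-to-test correlation can be computed with nothing more than the standard library, as in the hedged sketch below (Python 3.10+ for statistics.correlation); the scores shown are invented sample data, not benchmarks.

```python
# Assessment quality metrics sketch: completion rate and self-vs-test correlation.
from statistics import correlation  # available in Python 3.10+

def completion_rate(completed: int, invited: int) -> float:
    return completed / invited

self_scores = [0.75, 0.50, 0.90, 0.40, 0.60]   # self-assessed tier scores (0-1)
test_scores = [0.70, 0.45, 0.80, 0.55, 0.50]   # practical test scores (0-1)

print(f"Completion rate: {completion_rate(428, 500):.0%}")                        # target > 85%
print(f"Self vs test correlation: {correlation(self_scores, test_scores):.2f}")   # target > 0.6
```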

Gap Analysis Metrics

The analytical output of your assessment should be measured on three dimensions. 100% of roles should have defined competency requirements, because gaps cannot be measured without a clear target state. Average gap size by tier should be tracked over time as the primary measure of whether training is closing the distance between current and required capability. Gap distribution by department should be either even or justified by legitimate differences in AI exposure; unexplained variation signals systemic issues in how training or hiring has been managed.

Outcome Metrics

Ultimately, the assessment must produce results that the organisation acts on. Training recommendation acceptance should exceed 80%, indicating that business leaders find the findings actionable and credible. Gap closure rate at reassessment should reach at least 50% of priority gaps, validating that the training delivered in response to the assessment actually worked. Time-to-competency by role should be benchmarked at first measurement and then improved over subsequent cycles as the organisation refines its training delivery.
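
Gap closure rate is a simple ratio once baseline and reassessment gaps are recorded side by side, as in the illustrative sketch below; the records shown are invented.

```python
# Gap closure rate sketch: share of priority gaps reduced by reassessment.

priority_gaps = [
    {"role": "Customer Service",  "tier": "Applied",   "baseline_gap": 2, "reassessed_gap": 0},
    {"role": "Finance",           "tier": "Strategic", "baseline_gap": 1, "reassessed_gap": 1},
    {"role": "Business Analysts", "tier": "Applied",   "baseline_gap": 2, "reassessed_gap": 1},
]

closed = sum(1 for g in priority_gaps if g["reassessed_gap"] < g["baseline_gap"])
print(f"Gap closure rate: {closed / len(priority_gaps):.0%}")  # target >= 50% of priority gaps
```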

To understand how to measure training effectiveness after deployment, see our guide to measuring AI training effectiveness.


Tooling Suggestions

Survey and Assessment Platforms

For basic self-assessments, general survey tools such as Microsoft Forms, Google Forms, and Typeform provide sufficient functionality at low cost. Organisations seeking integrated tracking should consider LMS platforms with built-in assessment features, which allow training delivery and gap measurement to operate within a single system. For more sophisticated analysis, including adaptive questioning and automated gap scoring, dedicated skills assessment platforms offer the most robust capabilities.

Skills Management

Ongoing skills tracking requires platforms that can monitor AI competencies alongside other workforce capabilities. Learning experience platforms (LXPs) add value by recommending training based on identified gaps, creating a feedback loop between assessment and development. Many HRIS systems also include competency management modules that can be configured for AI-specific tracking without requiring a separate tool.

Analysis

The right analysis tool depends on organisational scale. Spreadsheet tools are sufficient for smaller organisations with straightforward assessment data. Larger-scale analysis benefits from business intelligence platforms that can handle complex segmentation and trend analysis. For organisations seeking to integrate skills data into broader workforce planning, HR analytics tools provide the most comprehensive view.

Practical Assessment

Verifying applied AI skills requires environments where employees can demonstrate actual capability. Sandbox AI environments allow controlled skills testing without risk to production systems. Screen recording tools enable evaluators to observe how employees interact with AI tools in realistic scenarios. Rubric-based scoring templates ensure consistency across evaluators and assessment sessions.


Taking Action

An AI training needs assessment is the foundation for effective AI capability building. Without it, you are guessing, and guessing with training budgets rarely ends well.

The organisations seeing real returns on AI training investment are those who know exactly what skills they need, where the gaps are, and how to prioritise limited resources. Assessment provides that clarity.

Ready to assess your organisation's AI training needs systematically?

Pertama Partners helps organisations design and conduct AI training needs assessments that translate directly into effective capability building. Our AI Readiness Audit includes a comprehensive skills assessment component tailored to your roles and objectives.

Book an AI Readiness Audit →


Common Questions

How should a large organisation structure an AI training needs assessment?

For large organisations, conduct the assessment in four phases: first, survey all employees to establish baseline AI literacy and identify self-reported skill gaps across departments. Second, conduct structured interviews with department heads to understand how AI could transform their specific workflows and what skills their teams need. Third, benchmark against industry standards and competitor capabilities to identify strategic skill gaps. Fourth, map findings to a role-based training matrix that categorises employees into tiers (awareness, practitioner, specialist) and assigns appropriate training programs with measurable competency outcomes.

What is the difference between AI awareness training and AI skills training?

AI awareness training provides a conceptual understanding of what AI is, its capabilities and limitations, and its implications for the industry. It is suitable for all employees and typically takes 2 to 4 hours. AI skills training teaches specific technical or applied competencies such as prompt engineering, using AI tools for data analysis, building automated workflows, or evaluating AI vendor solutions. Skills training is role-specific, takes 1 to 5 days depending on depth, and includes hands-on practice with relevant tools. Most organisations need both: universal awareness training plus targeted skills training for roles directly impacted by AI.

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs
