AI Change Management & Training · Guide · Practitioner

Pre-Training AI Skills Assessment: Baseline Your Team

February 8, 2026 · 9 min read · Pertama Partners

Conduct effective pre-training AI skills assessments to establish baseline capabilities, identify learning needs, and personalize training for maximum impact.

Part 4 of 10 in the series: AI Skills Assessment & Certification

Complete framework for assessing AI competencies and implementing certification programs. Learn how to measure AI literacy, evaluate training effectiveness, and build internal badging systems.

Key Takeaways

  1. Pre-training assessment eliminates wasted training investment by revealing baseline capabilities and enabling personalized learning paths.
  2. Effective pre-assessment combines multiple methods (knowledge tests, self-assessment surveys, and practical demonstrations) for comprehensive baseline measurement.
  3. Assessment timing matters: conduct it 1-2 weeks before training to allow analysis and personalization without risking skill change between assessment and training.
  4. Communicate the assessment's purpose clearly to maximize participation: frame it as a developmental tool for customization, not an evaluative judgment of worth.
  5. Use pre-assessment data for multiple purposes: learner routing, content customization, training design adjustment, and baseline establishment for measuring post-training impact.

The most expensive training mistake is delivering the wrong content to the wrong people at the wrong time. Pre-training assessment eliminates this waste by revealing who knows what before investing in learning interventions.

This guide provides a practical framework for conducting pre-training AI skills assessments that establish baseline capabilities, identify learning needs, and enable personalized training for maximum impact.

Why Pre-Training Assessment Matters

Avoid the "One-Size-Fits-All" Trap

Standard training assumes everyone starts at the same place. Reality: AI literacy varies wildly within organizations. Some employees experiment with ChatGPT daily; others have never touched an AI tool.

Delivering advanced content to beginners causes frustration and disengagement. Teaching basics to proficient users wastes time and signals disrespect. Pre-assessment enables right-sized training.

Optimize Training ROI

Training is expensive: instructor time, employee time, materials, platform costs. Organizations that assess before training report:

  • 35% reduction in training time through targeted content
  • 50% higher completion rates when training matches skill levels
  • 2.5x better knowledge retention from relevant, appropriately challenging material
  • Faster time-to-competency by focusing on actual gaps

Identify High-Risk Gaps

Some knowledge gaps pose immediate risk:

  • Employees using AI tools without understanding data privacy implications
  • Leaders making AI decisions without basic literacy
  • Customer-facing staff unable to explain AI-powered features
  • Compliance-sensitive roles lacking AI governance awareness

Pre-assessment reveals these critical gaps, enabling rapid intervention before incidents occur.

Enable Personalized Learning Paths

Modern learning platforms support adaptive experiences. Pre-assessment data feeds personalization engines that:

  • Skip content learners already know
  • Recommend specific modules addressing individual gaps
  • Adjust difficulty and pacing to learner needs
  • Provide role-relevant scenarios and examples

Timing Your Pre-Training Assessment

Optimal Windows

1-2 weeks before training: Provides time for data analysis and personalization without risking skill change

Day-of assessment: Works for just-in-time training where immediate baseline is needed

Avoid: Assessing too early (skills may change) or during training (creates fatigue)

Frequency Considerations

New hire onboarding: Always assess; experience varies dramatically

New AI tool rollout: Assess immediately before training; establishes true baseline

Ongoing development: Reassess periodically (quarterly or semi-annually) to track growth

Refresher training: Quick pulse check rather than comprehensive assessment

What to Assess Pre-Training

Core Knowledge Areas

AI Fundamentals

  • Definitions and terminology (AI, ML, LLM, generative AI)
  • Understanding of how AI systems work
  • Awareness of AI capabilities and limitations
  • Recognition of AI applications and use cases

Practical Skills

  • Current AI tool usage (if any)
  • Prompt writing ability
  • Output evaluation and critical thinking
  • Workflow integration experience

Policy and Governance

  • Awareness of organizational AI policies
  • Understanding of data privacy implications
  • Knowledge of appropriate AI use guidelines
  • Familiarity with incident reporting processes

Risk and Ethics

  • Recognition of AI-related risks
  • Understanding of bias and fairness issues
  • Awareness of compliance requirements
  • Ethical decision-making readiness

Attitudes and Mindsets

  • AI anxiety or enthusiasm levels
  • Openness to learning and change
  • Perceived relevance to role
  • Self-efficacy and confidence

Tool-Specific Assessments

For training on specific AI tools:

  • Prior experience with the tool
  • Understanding of tool-specific features
  • Awareness of integration points
  • Knowledge of tool-specific risks or policies

Pre-Assessment Methods and Instruments

Knowledge Tests

Format: Multiple-choice, true/false, or short-answer questions

Best for: Measuring factual knowledge and conceptual understanding

Sample questions:

  • "Which of the following best describes how large language models generate text?"
  • "True or False: It's safe to share customer data with public AI tools like ChatGPT"
  • "What should you do if an AI tool provides factually incorrect information?"

Design tips:

  • 10-15 questions for quick assessment
  • 20-30 questions for comprehensive baseline
  • Include "I don't know" options to reduce guessing
  • Mix difficulty levels to differentiate skill ranges
  • Use scenario-based questions over pure recall

Self-Assessment Surveys

Format: Rating scales on competency statements

Best for: Measuring perceived skills and identifying confidence levels

Sample statements:

  • "I can write effective prompts that generate useful AI outputs" (1-5 scale)
  • "I understand when AI should and shouldn't be used in my work" (1-5 scale)
  • "I feel confident troubleshooting issues with AI tools" (1-5 scale)

Design tips:

  • Include concrete examples to calibrate ratings
  • Ask about both knowledge ("I understand X") and capability ("I can do X")
  • Measure confidence separately from competence
  • Include open-ended questions about learning needs and interests

Practical Skill Demonstrations

Format: Task-based assessments with AI tools

Best for: Measuring actual capability rather than self-perception

Sample tasks:

  • "Write a prompt that generates a professional email responding to this customer complaint"
  • "Review this AI-generated report and identify any errors or concerns"
  • "Explain how you would use AI to complete [role-specific task]"

Design tips:

  • Keep tasks brief (5-10 minutes each)
  • Use realistic work scenarios
  • Provide clear evaluation rubrics
  • Consider automated scoring where possible
  • Allow multiple attempts if assessing learning readiness vs. performance

Needs Analysis Surveys

Format: Open-ended and structured questions about learning goals

Best for: Understanding motivation, perceived needs, and context

Sample questions:

  • "What AI tools or techniques are you most interested in learning?"
  • "What obstacles prevent you from using AI effectively in your work?"
  • "What specific outcomes do you hope to achieve through AI training?"

Design tips:

  • Keep surveys brief (5-10 minutes)
  • Balance open-ended exploration with structured options
  • Ask about barriers and enablers, not just skills
  • Include questions about preferred learning formats and pacing

Portfolio or Work Sample Review

Format: Examination of existing AI-related work

Best for: Assessing real-world capability and current practices

What to review:

  • Prompts employees have written
  • AI-generated content they've used
  • Documentation of AI workflows
  • Questions they've asked about AI

Design tips:

  • Request voluntary submission; don't create compliance burden
  • Look for patterns (consistent strengths or gaps)
  • Assess sophistication, not just volume
  • Use findings to inform curriculum examples

Designing Your Pre-Training Assessment

Step 1: Define Assessment Goals

What will you do with assessment data?

  • Route learners to appropriate training tracks
  • Customize content within training program
  • Identify learners needing prerequisites
  • Establish baseline for post-training comparison
  • Inform training design and emphasis

Goals shape assessment design and scope.

Step 2: Select Assessment Methods

Match methods to goals and constraints:

For large populations: Knowledge tests + self-assessment surveys (scalable, efficient)

For critical roles: Add practical demonstrations (higher validity)

For custom training: Include needs analysis surveys (informs design)

For adaptive platforms: Choose methods with quantifiable outputs (enables automation)

Most organizations combine 2-3 methods for balanced perspective.

Step 3: Develop Assessment Instruments

Knowledge Test Development:

  • Write 30-40 questions covering key topics
  • Pilot with small group to validate clarity and difficulty
  • Analyze item performance (difficulty and discrimination; see the sketch after this list)
  • Refine to 15-20 strongest items
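
Item analysis on pilot data does not require specialist software. The following is a minimal Python sketch, assuming pilot responses have already been scored as 1 (correct) or 0 (incorrect); the function names, the corrected item-total correlation used for discrimination, and the keep/review thresholds are illustrative assumptions, not fixed psychometric standards.

```python
# A minimal sketch of pilot item analysis. Assumes responses are stored as a
# list of per-learner answer vectors scored 1 (correct) / 0 (incorrect).
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 if either variable has no variance."""
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

def item_analysis(responses):
    """responses: list of lists, responses[learner][item] in {0, 1}."""
    n_items = len(responses[0])
    report = []
    for i in range(n_items):
        item_scores = [r[i] for r in responses]
        # Difficulty: proportion of pilot respondents who answered correctly.
        difficulty = mean(item_scores)
        # Discrimination: correlation with the rest-of-test score, so strong
        # items separate high and low performers.
        rest_scores = [sum(r) - r[i] for r in responses]
        discrimination = pearson(item_scores, rest_scores)
        # Illustrative cut-offs for flagging items to keep or rework.
        flag = "keep" if 0.2 <= difficulty <= 0.9 and discrimination >= 0.2 else "review"
        report.append((i + 1, round(difficulty, 2), round(discrimination, 2), flag))
    return report

# Example: 5 pilot learners, 4 items (illustrative data).
pilot = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
for item, diff, disc, flag in item_analysis(pilot):
    print(f"Item {item}: difficulty={diff}, discrimination={disc} -> {flag}")
```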

Survey Development:

  • Create competency statement list (15-20 items)
  • Use consistent rating scales (1-5 or 1-7)
  • Include calibration examples
  • Add 3-5 open-ended questions

Practical Task Development:

  • Design 2-3 representative scenarios
  • Create clear instructions and success criteria
  • Develop rubrics with specific performance indicators
  • Test for completion time and technical functionality

Step 4: Pilot and Validate

Test assessment with 10-20 representative employees:

  • Does assessment differentiate skill levels effectively?
  • Are instructions clear and unambiguous?
  • Is length appropriate (under 30 minutes total)?
  • Does it identify meaningful learning needs?
  • Are technical systems working properly?

Refine based on pilot feedback.

Step 5: Establish Cut Scores and Routing Rules

Define how assessment results translate to action:

Example routing rules:

  • Score 0-40%: Foundational track (Level 1 content)
  • Score 41-70%: Standard track (Level 2 content)
  • Score 71-100%: Advanced track (Level 3 content)

Example prerequisite rules:

  • Score <40% on governance section: Required policy module before training
  • Self-rated confidence <2 on tool usage: Additional hands-on practice lab

Clear rules enable automated personalization.
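
As a minimal sketch of how these routing and prerequisite rules might be encoded, the snippet below reuses the thresholds and track names from the examples above; the function name and input parameters are illustrative assumptions.

```python
# A minimal sketch of the routing and prerequisite rules described above.
# Thresholds and track labels mirror the examples in this guide; adapt them
# to your own assessment structure.

def route_learner(overall_pct, governance_pct, tool_confidence):
    """Map assessment results to a training track and any prerequisites.

    overall_pct / governance_pct: 0-100 section scores
    tool_confidence: 1-5 self-rating on tool usage
    """
    if overall_pct <= 40:
        track = "Foundational track (Level 1 content)"
    elif overall_pct <= 70:
        track = "Standard track (Level 2 content)"
    else:
        track = "Advanced track (Level 3 content)"

    prerequisites = []
    if governance_pct < 40:
        prerequisites.append("Required policy module before training")
    if tool_confidence < 2:
        prerequisites.append("Additional hands-on practice lab")

    return {"track": track, "prerequisites": prerequisites}

# Example: a learner scoring 55% overall, 30% on governance, confidence 1/5.
print(route_learner(55, 30, 1))
```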

Administering Pre-Training Assessment

Communication Strategy

Position assessment effectively:

  • Frame positively: "This helps us customize training to your needs"
  • Reduce anxiety: "This is not a performance evaluation"
  • Clarify benefits: "You'll skip content you already know"
  • Set expectations: "Takes 20 minutes; be honest, results are confidential"
  • Provide context: "Everyone has different starting points; that's normal"

Logistical Setup

Platform selection:

  • LMS-integrated assessment (seamless experience)
  • Survey tools (SurveyMonkey, Qualtrics, Google Forms)
  • Specialized assessment platforms
  • Custom build for sophisticated needs

Access and accommodations:

  • Ensure accessibility compliance
  • Provide accommodations for disabilities
  • Allow adequate time for completion
  • Support multiple devices and browsers

Data privacy:

  • Clarify who sees individual results
  • Protect personally identifiable information
  • Follow organizational data governance
  • Be transparent about data use

Maximizing Participation

Mandatory vs. voluntary:

  • Make mandatory when training is required
  • Make voluntary for optional development
  • Provide completion incentive if needed

Timing and reminders:

  • Give 3-5 day window for completion
  • Send reminder at 50% and 75% of window
  • Extend deadline flexibly for extenuating circumstances

Manager engagement:

  • Equip managers to encourage participation
  • Provide talking points about benefits
  • Ask managers to allocate time during work hours

Analyzing Pre-Training Assessment Data

Individual-Level Analysis

For each learner:

  • Overall competency level: Determines primary track
  • Specific gap areas: Identifies focused interventions
  • Strengths to leverage: Enables peer teaching opportunities
  • Confidence vs. competence: Reveals Dunning-Kruger effects (see the sketch after this list)
  • Learning preferences: Informs delivery modality
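
As a sketch of the confidence-versus-competence check, the snippet below assumes each learner has a knowledge-test percentage and an average self-rating on a 1-5 scale; the normalization and the 0.25 gap threshold are illustrative assumptions.

```python
# A minimal sketch of flagging gaps between self-perception and measured knowledge.

def confidence_flag(test_pct, self_rating, gap_threshold=0.25):
    """Compare a 0-100 test score with a 1-5 self-rating, both normalised to 0-1."""
    competence = test_pct / 100
    confidence = (self_rating - 1) / 4
    gap = confidence - competence
    if gap > gap_threshold:
        return "overconfident: pair with reality-check exercises"
    if gap < -gap_threshold:
        return "under-confident: add confidence-building support"
    return "calibrated"

# Illustrative learners: (test score %, average self-rating 1-5)
learners = {"Amir": (35, 4.5), "Beth": (85, 2.0), "Chen": (60, 3.5)}
for name, (score, rating) in learners.items():
    print(name, "->", confidence_flag(score, rating))
```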

Group-Level Analysis

Across learner cohort:

  • Skill distribution: Informs training design emphasis
  • Common gaps: Highlights universal needs
  • Variance: Indicates need for differentiation
  • Segment differences: Reveals patterns by role, department, experience

Training Design Implications

If baseline is low:

  • Add foundational content
  • Increase scaffolding and support
  • Extend training duration
  • Provide additional practice opportunities

If baseline is high:

  • Accelerate pace
  • Reduce review of basics
  • Add advanced material
  • Challenge with complex scenarios

If variance is high:

  • Create multiple tracks or modules
  • Enable self-paced progression
  • Use adaptive learning technology
  • Facilitate peer learning across levels
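
A minimal sketch of turning cohort-level statistics into these design adjustments follows; the cut-offs for "low baseline", "high baseline", and "high variance" are illustrative assumptions to adapt to your own scoring scale.

```python
# A minimal sketch of cohort-level analysis feeding training design decisions.
# Cut-offs (mean below 45%, above 75%, standard deviation above 20 points)
# are illustrative assumptions, not research-backed thresholds.
from statistics import mean, pstdev

def design_recommendations(scores):
    """scores: list of overall assessment percentages for one cohort."""
    avg, spread = mean(scores), pstdev(scores)
    recs = []
    if avg < 45:
        recs.append("Low baseline: add foundational content, scaffolding, and extra practice")
    elif avg > 75:
        recs.append("High baseline: accelerate pace, reduce basics, add advanced scenarios")
    if spread > 20:
        recs.append("High variance: create multiple tracks or enable self-paced, adaptive progression")
    return {"mean": round(avg, 1), "std_dev": round(spread, 1), "recommendations": recs}

# Illustrative cohort scores.
print(design_recommendations([22, 35, 48, 85, 91, 40, 77, 30]))
```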

Using Pre-Assessment for Personalization

Adaptive Learning Paths

Branching based on assessment:

  • Low scorers → Foundational modules → Core content → Practice
  • Mid scorers → Core content → Advanced modules → Application
  • High scorers → Advanced modules → Capstone projects → Teaching others

Content Customization

Tailoring examples and scenarios:

  • Marketing role + low AI literacy → Basic AI for content creation
  • Marketing role + high AI literacy → Advanced AI marketing automation
  • Finance role + low AI literacy → Basic AI for analysis
  • Finance role + high AI literacy → Advanced AI for forecasting and modeling

Pacing and Support

Adjusting experience:

  • Struggling learners: more time, coaching, additional resources
  • Average learners: standard pace, peer learning, self-service support
  • Advanced learners: accelerated pace, challenge problems, mentoring opportunities

Communicating Assessment Results

To Learners

Provide actionable feedback:

  • Current competency level and what it means
  • Specific strengths and gaps identified
  • Recommended learning path or modules
  • Resources for pre-training preparation
  • Reassurance about learning support

Example feedback: "Your assessment shows strong understanding of AI concepts but limited practical experience with AI tools. We recommend starting with our hands-on AI Fundamentals module before joining the advanced workshop. This will build your confidence and ensure you get maximum value from the training."

To Managers

Aggregate insights without violating privacy:

  • Team readiness overview (distribution across levels)
  • Common gap areas requiring attention
  • Recommended training timeline and approach
  • Suggestions for post-training reinforcement

To Training Team

Detailed data for design decisions:

  • Competency distribution by topic area
  • Question-level performance analysis
  • Confidence and attitude data
  • Learning preference information
  • Specific curriculum recommendations

Addressing Pre-Assessment Challenges

Low Participation

Causes: Unclear value, time constraints, anxiety

Solutions: Improve communication, provide work time, reduce length, offer incentives, engage managers

Gaming or Dishonesty

Causes: Fear of judgment, desire to skip training, misunderstanding

Solutions: Emphasize developmental purpose, protect privacy, explain personalization benefits, remove stakes

Technical Issues

Causes: Platform problems, access barriers, poor UX

Solutions: Test thoroughly, provide IT support, offer alternative formats, extend deadlines

Misaligned Results

Causes: Poor instrument design, guessing, Dunning-Kruger effects

Solutions: Improve questions, validate against other data, combine multiple methods

Connecting Pre- and Post-Assessment

Pre-assessment sets baseline for measuring training effectiveness:

Matched assessment design: Use same or parallel instruments pre and post

Growth measurement: Calculate individual and group gains

Effectiveness analysis: Correlate pre-assessment levels with post-training gains

Continuous improvement: Use data to refine assessment and training
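
One common way to express growth between matched pre- and post-assessments is the normalized gain: the improvement as a share of the maximum possible improvement. The sketch below assumes percentage scores; the sample data is illustrative.

```python
# A minimal sketch of pre/post growth measurement using a matched instrument.
from statistics import mean

def normalized_gain(pre_pct, post_pct):
    """(post - pre) / (100 - pre); returns 0.0 if there was no room to improve."""
    if pre_pct >= 100:
        return 0.0
    return (post_pct - pre_pct) / (100 - pre_pct)

# Illustrative matched scores from the same instrument given pre and post.
pre_scores  = {"Amir": 35, "Beth": 60, "Chen": 80}
post_scores = {"Amir": 70, "Beth": 75, "Chen": 95}

gains = {name: normalized_gain(pre_scores[name], post_scores[name]) for name in pre_scores}
for name, g in gains.items():
    print(f"{name}: normalized gain = {g:.2f}")
print(f"Cohort average gain = {mean(gains.values()):.2f}")
```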

Conclusion

Pre-training AI skills assessment is not optional—it's essential for effective, efficient learning. Assessment reveals the true starting point, enables personalization, and establishes baseline for measuring impact.

Invest time in thoughtful assessment design: select appropriate methods, develop quality instruments, communicate clearly, and analyze data for actionable insights. The return on this investment is training that meets learners where they are and delivers measurable capability improvement.

Frequently Asked Questions

How long should a pre-training assessment take?

Target 15-30 minutes for most pre-training assessments. Quick assessments (10-15 minutes) work for simple training or time-constrained populations. Comprehensive assessments (30-45 minutes) suit complex training or high-stakes roles. Longer assessments reduce completion rates, so prioritize essential measurement over comprehensive coverage.

What if the assessment reveals the team isn't ready for the planned training?

This valuable insight prevents training failure. Options: delay training and provide prerequisite learning first; create a foundational track for unprepared employees; redesign training to start at the appropriate level; provide pre-training resources (videos, articles) to raise the baseline. It is better to adjust plans than to deliver ineffective training to an unprepared audience.

Should we share individual assessment results with employees?

Yes, with context. Share results that help employees understand their starting point and recommended learning path. Frame scores developmentally ("You're starting at Level 2, and training will help you reach Level 3") rather than judgmentally. Avoid comparison to others; focus on individual growth and learning support.

Should employees who score highly be exempted from training?

Carefully. If the assessment demonstrates true proficiency (not just high self-ratings), exemption may be appropriate. However, consider: is the training purely knowledge transfer, or does it include policy communication, certification, or team-building? Even proficient employees may benefit from participation. Consider fast-track options rather than complete exemption.

What if self-assessment ratings and test scores don't match?

Misalignment is common and informative. A high self-rating with a low test score suggests overconfidence (the Dunning-Kruger effect) and calls for a reality check plus skill building. A low self-rating with a high test score indicates imposter syndrome and calls for confidence building. Use practical demonstrations to validate actual capability and tailor support accordingly.

Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.
