Why You Need an AI Use-Case Intake Process
As AI adoption grows, the number of AI use-case ideas in your organisation will multiply rapidly. Without a structured intake process, you face two problems:
- Good ideas get lost — Employees suggest AI applications informally, but there is no system to capture, evaluate, and act on them
- Bad ideas consume resources — Without evaluation criteria, the loudest voice or most senior requester wins, rather than the highest-value use case
An AI use-case intake process creates a fair, transparent system for capturing AI ideas from anywhere in the organisation and routing them through evaluation, prioritisation, and (if approved) implementation.
The Intake Process Overview
Stage 1: Submission
Any employee can submit an AI use-case idea through a standard intake form.
Stage 2: Initial Triage
The AI governance committee or designated reviewer conducts a quick assessment to filter out duplicates, out-of-scope requests, and clearly infeasible ideas.
Stage 3: Detailed Evaluation
Promising use cases are scored against standardised criteria covering business value, feasibility, risk, and alignment.
Stage 4: Prioritisation
Scored use cases are ranked and placed on the AI project backlog.
Stage 5: Approval and Assignment
The top-priority use cases are approved for implementation and assigned to an AI champion or project team.
Stage 6: Implementation and Review
The use case is implemented, measured, and reviewed. Learnings feed back into the process.
AI Use-Case Intake Form Template
Section 1: Submitter Information
| Field | Entry |
|---|---|
| Name | |
| Department | |
| Role | |
| Date | |
Section 2: Use Case Description
What is the current process or task? [Describe the current way this work is done, without AI]
What problem does this solve or what opportunity does it create? [Describe the pain point, inefficiency, or missed opportunity]
How would AI improve this process? [Describe specifically how AI would be used — which tool, what inputs, what outputs]
Who would benefit? [List the team, department, or stakeholders who would benefit]
How often is this task performed?
- Daily
- Weekly
- Monthly
- Ad hoc / As needed
Estimated time currently spent on this task: [Hours per week or per occurrence]
Section 3: Data and Risk
What data would be used as input to the AI tool? [Describe the data types involved]
Does this data include personal data?
- Yes
- No
- Unsure
Does this data include confidential or client data?
- Yes
- No
- Unsure
What is the impact if the AI output is incorrect?
- Low — minor inconvenience, easily corrected
- Medium — requires rework, could cause delays
- High — could cause financial loss, reputational damage, or compliance issues
- Critical — could cause harm to individuals or severe business impact
Section 4: Expected Benefits
Estimated time saved per week/month: [Hours]
Other expected benefits: [e.g. improved quality, faster turnaround, better customer experience, reduced errors]
Is this a quick win (can be implemented in 1-2 weeks) or a strategic initiative (requires 1-3 months)?
- Quick win
- Strategic initiative
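For teams that capture submissions in a tracking system rather than a document, the form above maps naturally onto a structured record. Below is a minimal sketch in Python; the class and field names are illustrative assumptions, not part of the template:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Frequency(Enum):
    DAILY = "Daily"
    WEEKLY = "Weekly"
    MONTHLY = "Monthly"
    AD_HOC = "Ad hoc / As needed"

class Impact(Enum):  # impact if the AI output is incorrect
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"
    CRITICAL = "Critical"

class YesNoUnsure(Enum):
    YES = "Yes"
    NO = "No"
    UNSURE = "Unsure"

@dataclass
class UseCaseSubmission:
    # Section 1: Submitter information
    name: str
    department: str
    role: str
    submitted_on: date
    # Section 2: Use case description
    current_process: str
    problem_or_opportunity: str
    proposed_ai_approach: str
    beneficiaries: str
    frequency: Frequency
    hours_spent: float  # per week or per occurrence
    # Section 3: Data and risk
    input_data: str
    personal_data: YesNoUnsure
    confidential_data: YesNoUnsure
    incorrect_output_impact: Impact
    # Section 4: Expected benefits
    hours_saved: float  # per week or month
    other_benefits: str
    quick_win: bool  # True = quick win, False = strategic initiative
```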
Evaluation Scoring Criteria
Use these criteria to score each submitted use case on a 1-5 scale:
Business Value (Weight: 30%)
| Score | Criteria |
|---|---|
| 5 | Major time/cost savings, directly impacts revenue or customer satisfaction |
| 4 | Significant productivity improvement for a large team |
| 3 | Moderate improvement for a department |
| 2 | Minor convenience improvement |
| 1 | Nice to have, minimal measurable impact |
Feasibility (Weight: 25%)
| Score | Criteria |
|---|---|
| 5 | Can be done immediately with existing approved tools, no custom development |
| 4 | Requires minor configuration or workflow adjustment |
| 3 | Requires new tool approval or moderate setup effort |
| 2 | Requires custom development or significant integration work |
| 1 | Technically very challenging, uncertain feasibility |
Risk Level (Weight: 25%) — Inverse scoring (a higher score means lower risk)
| Score | Criteria |
|---|---|
| 5 | No personal data, low impact if output is incorrect, no regulatory concern |
| 4 | Minimal personal data, low to medium impact, standard compliance |
| 3 | Some personal data or medium impact, requires human review |
| 2 | Significant personal data or high impact, requires careful governance |
| 1 | Critical data or impact, major regulatory considerations |
Strategic Alignment (Weight: 20%)
| Score | Criteria |
|---|---|
| 5 | Directly supports company strategic priorities and AI roadmap |
| 4 | Supports departmental goals and demonstrates AI value |
| 3 | Moderately aligned with company direction |
| 2 | Tangentially related |
| 1 | Does not align with current priorities |
Composite Score Calculation
Composite Score = (Business Value × 0.30) + (Feasibility × 0.25) + (Risk Level × 0.25) + (Alignment × 0.20)
Score ranges:
- 4.0 - 5.0: High priority — fast-track for implementation
- 3.0 - 3.9: Medium priority — add to backlog, implement when capacity allows
- 2.0 - 2.9: Low priority — reconsider in 3-6 months or when conditions change
- Below 2.0: Not recommended — provide feedback to submitter
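As a worked illustration, the calculation and the priority bands above can be expressed in a few lines of Python. This is a minimal sketch; the function names are assumptions:

```python
def composite_score(business_value, feasibility, risk_level, alignment):
    """Weighted composite of the four criterion scores (each 1-5).

    risk_level is already inverse-scored (5 = lowest risk), so all
    four inputs contribute positively to the composite.
    """
    for s in (business_value, feasibility, risk_level, alignment):
        if not 1 <= s <= 5:
            raise ValueError("each criterion score must be between 1 and 5")
    return (business_value * 0.30 + feasibility * 0.25
            + risk_level * 0.25 + alignment * 0.20)

def priority_band(score):
    """Map a composite score to the bands defined above."""
    if score >= 4.0:
        return "High priority"
    if score >= 3.0:
        return "Medium priority"
    if score >= 2.0:
        return "Low priority"
    return "Not recommended"
```

For example, a use case scoring 4 on business value, 5 on feasibility, 3 on risk, and 4 on alignment yields (4 × 0.30) + (5 × 0.25) + (3 × 0.25) + (4 × 0.20) = 4.0, placing it in the high-priority band.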
Governance Workflow
Triage (Within 5 business days of submission)
The AI governance committee or designated reviewer:
- Checks for duplicate or similar submissions
- Confirms the use case is within scope (not already addressed by an existing tool)
- Assigns an initial priority estimate
- Communicates receipt to the submitter
Evaluation (Within 10 business days)
For use cases that pass triage:
- Score against the evaluation criteria
- Identify any governance or compliance concerns
- Estimate implementation effort and timeline
- Prepare recommendation for the governance committee
Decision (Monthly governance meeting)
The AI governance committee:
- Reviews all scored use cases
- Decides: Approve, Defer, or Reject
- Assigns approved use cases to an AI champion or project team
- Communicates decisions to all submitters
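Because the triage and evaluation SLAs are expressed in business days, it helps to compute deadlines the same way. Below is a minimal sketch using only the Python standard library; it skips weekends but not public holidays, which is a simplifying assumption:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Return the date `days` business days after `start` (weekends skipped)."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# Deadlines for a submission received today
submitted = date.today()
triage_due = add_business_days(submitted, 5)        # triage within 5 business days
evaluation_due = add_business_days(triage_due, 10)  # evaluation within 10 more
```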
Implementation Tracking
| Field | Details |
|---|---|
| Use case ID | [AUTO-GENERATED] |
| Status | Submitted / In Triage / In Evaluation / Approved / In Progress / Completed / Deferred / Rejected |
| Assigned to | [CHAMPION OR TEAM] |
| Start date | [DATE] |
| Target completion | [DATE] |
| Actual completion | [DATE] |
| Results | [MEASURED OUTCOMES] |
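The status field above implies a simple lifecycle. One way to keep status updates consistent is to encode the allowed transitions explicitly; the sketch below is an illustrative assumption about which moves are legal, not a prescribed workflow:

```python
from enum import Enum

class Status(Enum):
    SUBMITTED = "Submitted"
    IN_TRIAGE = "In Triage"
    IN_EVALUATION = "In Evaluation"
    APPROVED = "Approved"
    IN_PROGRESS = "In Progress"
    COMPLETED = "Completed"
    DEFERRED = "Deferred"
    REJECTED = "Rejected"

# Assumed legal moves; deferred use cases can re-enter evaluation later.
ALLOWED = {
    Status.SUBMITTED:     {Status.IN_TRIAGE},
    Status.IN_TRIAGE:     {Status.IN_EVALUATION, Status.REJECTED},
    Status.IN_EVALUATION: {Status.APPROVED, Status.DEFERRED, Status.REJECTED},
    Status.APPROVED:      {Status.IN_PROGRESS},
    Status.IN_PROGRESS:   {Status.COMPLETED},
    Status.DEFERRED:      {Status.IN_EVALUATION},
    Status.COMPLETED:     set(),
    Status.REJECTED:      set(),
}

def transition(current: Status, new: Status) -> Status:
    """Raise if the requested status change is not an allowed move."""
    if new not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.value} to {new.value}")
    return new
```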
Encouraging Submissions
The intake process only works if employees actually use it. To encourage submissions:
- Make it easy — Use a simple form (the template above), not a bureaucratic process
- Respond quickly — Acknowledge every submission within 2 business days
- Celebrate successes — Share implemented use cases and their results publicly
- Provide feedback — Even rejected ideas deserve an explanation
- Remove barriers — Employees should not need manager approval to submit an idea
Related Reading
- AI Evaluation Framework — Evaluate use cases with a structured quality and risk framework
- AI Vendor Approval Checklist — Approve the tools needed for each use case
- ChatGPT Approved Use Cases — Examples of approved ChatGPT use cases by department
Frequently Asked Questions
Who can submit an AI use-case idea?
Any employee should be able to submit an AI use-case idea. The best AI applications often come from frontline staff who understand daily pain points intimately. The intake form is designed to be simple enough that anyone can complete it in 10-15 minutes.
How long does the process take from submission to decision?
Initial triage should happen within 5 business days of submission. Detailed evaluation takes another 5-10 business days. Final decisions are made at the monthly governance meeting. From submission to decision, expect 2-4 weeks. Quick wins can be fast-tracked.
What proportion of submitted use cases are approved?
Typically 30-50% of submitted use cases are approved for implementation. Another 20-30% are deferred (good ideas but not the right time). The remainder are rejected due to feasibility, risk, or priority issues. Providing clear feedback on rejections encourages continued submissions.
