AI Governance & Risk Management · Playbook · Practitioner

AI Approval Workflow: Designing Governance Processes

January 25, 2026 · 11 min read · Michael Lansdowne Hauge

For: Governance Officers, Project Managers, Risk Teams

Design effective AI approval workflows that balance governance rigor with operational speed. Includes templates and decision frameworks.


Key Takeaways

  1. Effective AI approval workflows balance governance rigor with operational speed
  2. Design tiered approval levels based on risk classification—not all AI needs the same scrutiny
  3. Include clear escalation paths and decision criteria to avoid bottlenecks
  4. Automate low-risk approvals while maintaining human oversight for high-stakes decisions
  5. Build in feedback mechanisms to continuously improve workflow efficiency

"We need an AI approval process" usually means one of two things: either nothing gets approved (bureaucratic gridlock) or everything gets approved (rubber stamp). Neither serves governance objectives.

Effective AI approval workflows balance risk management with business agility. This guide shows how to design approval processes that protect the organization without blocking legitimate AI innovation.


Executive Summary

  • One-size-fits-all approval doesn't work—low-risk AI needs different treatment than high-risk AI
  • Tiered approval matches effort to risk—simple approvals for low risk, thorough review for high risk
  • Clear criteria prevent subjective bottlenecks—define what triggers each tier objectively
  • Process transparency builds trust—requesters who understand requirements navigate approval faster
  • Cycle time matters—if approval takes longer than shadow deployment, you've lost control
  • Documentation serves future decisions—approval records inform ongoing governance
  • Exception handling must exist—rigid processes break under real-world pressure

Why This Matters Now

AI governance is maturing from policies to operations:

Policy implementation gap. Organizations have AI policies; many lack processes to enforce them.

Shadow AI risk. Difficult approval processes drive AI underground. Business units find workarounds.

Audit expectations. Internal audit and external assessors expect documented approval trails for AI deployments.

Accountability requirements. When AI causes problems, "who approved this?" is the first question. You need an answer.


Definitions and Scope

AI approval workflow: The process by which AI initiatives receive organizational authorization to proceed through development, deployment, and operation.

Approval scope:

  • New AI system deployments
  • Significant changes to existing AI systems
  • AI vendor/tool procurement
  • AI feature activation in existing software
  • AI pilots and proofs of concept

Workflow components:

| Component | Purpose |
| --- | --- |
| Request intake | Standardized information capture |
| Risk classification | Determine approval tier |
| Review and assessment | Evaluate against criteria |
| Approval decision | Authorize, reject, or require changes |
| Documentation | Record decision and rationale |
| Monitoring handoff | Connect approval to ongoing oversight |

RACI Example: AI Approval Workflow

| Activity | Requester | AI System Owner | IT Security | Risk/Compliance | AI Governance Committee |
| --- | --- | --- | --- | --- | --- |
| Submit request | R/A | C | I | I | I |
| Initial screening | I | R | C | C | I |
| Risk classification | C | R | C | A | I |
| Technical review | I | C | R/A | I | I |
| Compliance review | C | C | I | R/A | I |
| Tier 1 approval | I | A | C | C | I |
| Tier 2 approval | I | C | C | C | R/A |
| Tier 3 approval | I | C | C | R | A |
| Documentation | R | A | C | C | I |
| Monitoring setup | I | R | C | A | I |

R = Responsible, A = Accountable, C = Consulted, I = Informed


Step-by-Step Implementation Guide

Phase 1: Design the Framework (Weeks 1-2)

Step 1: Define approval scope

Clarify what requires approval:

  • All new AI deployments
  • Significant changes to existing AI (define "significant")
  • AI vendors and procurement
  • Activation of AI features in existing tools
  • Pilots and experiments (possibly lighter process)

Clarify exclusions:

  • Personal use of publicly available AI (covered by AUP)
  • Minor configuration changes
  • Feature updates from existing vendors (covered by vendor management)

Step 2: Establish approval tiers

Create risk-based tiers:

Tier 1: Streamlined Approval

  • Low-risk AI applications
  • Standard safeguards sufficient
  • Approval authority: AI System Owner + IT Security sign-off
  • Target cycle time: 5 business days

Tier 2: Standard Approval

  • Medium-risk AI applications
  • Enhanced review required
  • Approval authority: AI Governance Committee
  • Target cycle time: 15 business days

Tier 3: Executive Approval

  • High-risk AI applications
  • Comprehensive assessment required
  • Approval authority: AI Governance Committee + Executive/Board
  • Target cycle time: 30 business days
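The three tiers above can be encoded as configuration so that routing and SLA tracking read from a single source of truth. A minimal sketch; the structure and names are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical encoding of the three approval tiers as configuration.
# Labels, approvers, and targets come from the tier definitions above.
TIERS = {
    1: {"label": "Streamlined", "target_days": 5,
        "approvers": ["AI System Owner", "IT Security"]},
    2: {"label": "Standard", "target_days": 15,
        "approvers": ["AI Governance Committee"]},
    3: {"label": "Executive", "target_days": 30,
        "approvers": ["AI Governance Committee", "Executive/Board"]},
}

def sla_for(tier: int) -> int:
    """Return the target cycle time (business days) for a tier."""
    return TIERS[tier]["target_days"]
```

Keeping tier definitions in one place means a later change to, say, Tier 2's cycle-time target propagates to routing, dashboards, and reminders automatically.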

Step 3: Define tier classification criteria

Objective criteria for risk classification:

| Factor | Low Risk (Tier 1) | Medium Risk (Tier 2) | High Risk (Tier 3) |
| --- | --- | --- | --- |
| Data sensitivity | Public/internal | Confidential | Highly sensitive/regulated |
| Decision impact | Advisory only | Influences decisions | Makes decisions |
| Affected population | Internal only | Limited external | Broad external |
| Reversibility | Easily reversed | Reversible with effort | Difficult/impossible to reverse |
| Regulatory scope | No specific regulation | General compliance | Specific AI/sector regulation |
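The classification factors above lend themselves to a simple, objective rule. The sketch below assumes a conservative "highest factor wins" policy, where each factor is rated 1 (low), 2 (medium), or 3 (high); both the rule and the names are illustrative assumptions:

```python
# Hypothetical tier classifier: each factor from the criteria table is
# rated 1/2/3, and the highest-rated factor determines the tier.
from dataclasses import dataclass

@dataclass
class RiskFactors:
    data_sensitivity: int      # 1 = public/internal ... 3 = highly sensitive
    decision_impact: int       # 1 = advisory only ... 3 = makes decisions
    affected_population: int   # 1 = internal only ... 3 = broad external
    reversibility: int         # 1 = easily reversed ... 3 = irreversible
    regulatory_scope: int      # 1 = no specific regulation ... 3 = AI/sector-specific

def classify_tier(f: RiskFactors) -> int:
    """Conservative rule: the single highest-rated factor sets the tier."""
    return max(f.data_sensitivity, f.decision_impact,
               f.affected_population, f.reversibility, f.regulatory_scope)

# Example: confidential data (2), advisory only (1), internal users (1),
# easily reversed (1), general compliance (2) -> Tier 2.
print(classify_tier(RiskFactors(2, 1, 1, 1, 2)))  # 2
```

A max rule is deliberately strict: a single high-risk factor (e.g. irreversibility) is enough to escalate, which avoids averaging away a serious concern.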

Phase 2: Design the Process (Weeks 3-4)

Step 4: Create request intake

Standardize request information:

Basic Information:

  • Initiative name and description
  • Business sponsor and system owner
  • Intended deployment date
  • Vendor/technology involved

Risk Classification Inputs:

  • Data types processed
  • Decision types supported/made
  • User/stakeholder population
  • Integration points
  • Regulatory considerations

Supporting Documentation:

  • Business case
  • Technical architecture
  • Data protection impact assessment (if applicable)
  • Vendor security assessment (if applicable)
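The intake fields above can be captured in a standardized record so incomplete submissions are caught before review begins. A minimal sketch; field names are illustrative assumptions:

```python
# Hypothetical intake record mirroring the request information above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApprovalRequest:
    initiative_name: str
    description: str
    business_sponsor: str
    system_owner: str
    intended_deployment_date: str        # ISO date string, e.g. "2026-03-01"
    vendor_technology: str
    data_types: List[str]                # risk classification inputs
    decision_types: List[str]
    stakeholder_population: str
    integration_points: List[str]
    regulatory_considerations: List[str]
    attachments: List[str] = field(default_factory=list)  # business case, DPIA, etc.

    def missing_fields(self) -> List[str]:
        """List required fields left empty, so intake can reject early."""
        return [name for name, value in vars(self).items()
                if value in ("", [], None) and name != "attachments"]
```

Validating at intake, rather than during review, is what keeps reviewer time focused on assessment instead of chasing missing information.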

Step 5: Design review process

For each tier, define reviews:

Tier 1 Reviews:

  • Technical feasibility (IT)
  • Security baseline (IT Security)
  • Policy compliance (Self-attestation with spot-checks)

Tier 2 Reviews (add to Tier 1):

  • Risk assessment (Risk/Compliance)
  • Data protection review (DPO)
  • Stakeholder impact assessment
  • AI Governance Committee review

Tier 3 Reviews (add to Tier 2):

  • External expert review (if needed)
  • Executive briefing
  • Board notification/approval

Step 6: Establish decision criteria

Define what approvers evaluate:

| Criterion | Assessment Question |
| --- | --- |
| Strategic alignment | Does this support business objectives? |
| Risk proportionality | Are risks appropriate for expected benefits? |
| Control adequacy | Are safeguards sufficient for risk level? |
| Compliance status | Does this meet regulatory requirements? |
| Operational readiness | Can we operate this responsibly? |
| Resource availability | Do we have capacity to implement and maintain? |
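One way to make these criteria enforceable is to record an explicit yes/no assessment per criterion and approve only when all six are met. A hedged sketch under that assumption; the names and the all-or-nothing rule are illustrative, not a mandated decision policy:

```python
# Hypothetical decision record: every criterion from the table above must
# be assessed, and any unmet criterion blocks approval.
CRITERIA = ["strategic_alignment", "risk_proportionality", "control_adequacy",
            "compliance_status", "operational_readiness", "resource_availability"]

def decide(assessment: dict) -> str:
    """Return 'approve' or 'require changes: ...' from a criterion->bool map."""
    missing = [c for c in CRITERIA if c not in assessment]
    if missing:
        raise ValueError(f"unassessed criteria: {missing}")
    failed = [c for c in CRITERIA if not assessment[c]]
    return "approve" if not failed else "require changes: " + ", ".join(failed)
```

Returning the specific failed criteria, rather than a bare rejection, gives requesters an actionable path to resubmission.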

Phase 3: Build Supporting Elements (Weeks 5-6)

Step 7: Create documentation templates

Standardize records:

  • Request form template
  • Risk classification checklist
  • Review assessment forms
  • Approval decision record
  • Conditions and follow-up tracker

Step 8: Design exception process

Not everything fits standard process:

Exception Types:

  • Expedited approval (urgent business need, risk acknowledged)
  • Conditional approval (proceed with additional controls)
  • Pilot exception (limited scope, defined evaluation period)

Exception Requirements:

  • Written justification
  • Risk acknowledgment
  • Compensating controls
  • Defined scope and duration
  • Senior approval authority
  • Monitoring requirements

Step 9: Establish escalation paths

When process breaks down:

  • Requester disagrees with classification
  • Reviewers disagree on assessment
  • Approval decision contested
  • Emergency deployment needed

Define who resolves each scenario.

Phase 4: Implement and Iterate (Weeks 7-10)

Step 10: Pilot the process

Test with real requests:

  • Select 3-5 pending AI initiatives
  • Run through new process
  • Time each stage
  • Gather feedback from participants

Step 11: Refine based on pilot

Common adjustments:

  • Clarify classification criteria
  • Streamline documentation requirements
  • Adjust approval authorities
  • Improve request intake
  • Add missing decision criteria

Step 12: Launch and communicate

Rollout activities:

  • Announce process to organization
  • Train requesters on intake
  • Train reviewers on assessment
  • Train approvers on decision-making
  • Publish process documentation

Common Failure Modes

All AI treated the same. Applying heavy process to low-risk AI creates delays and shadow deployment.

Classification ambiguity. Subjective risk determination creates inconsistency and disputes. Use objective criteria.

Review without decision authority. Reviewers provide input but no one decides. Clarify who approves.

Cycle time creep. Each reviewer adds a little time; total exceeds business tolerance. Set and enforce cycle time targets.

Documentation burden. Excessive paperwork deters legitimate requests. Right-size documentation to risk.

Exception abuse. Every request becomes an exception. Limit exception authority and track exception rates.


Checklist: AI Approval Workflow Implementation

□ Approval scope defined (what requires approval)
□ Approval tiers established (risk-based)
□ Classification criteria documented (objective)
□ Request intake form created
□ Review processes defined for each tier
□ Decision criteria established
□ Approval authorities assigned
□ Cycle time targets set
□ Exception process designed
□ Escalation paths defined
□ Documentation templates created
□ Process piloted with real requests
□ Refinements made based on pilot
□ Training provided to stakeholders
□ Process published and communicated
□ Metrics tracking established

Metrics to Track

Process efficiency:

  • Average cycle time by tier
  • Requests completed within target
  • Requests pending > 30 days

Process quality:

  • Rework rate (requests sent back)
  • Exception rate
  • Appeals/escalations

Governance effectiveness:

  • Approved AI with documented trails
  • Post-approval issues identified
  • Shadow AI discovered
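The cycle-time metrics above are straightforward to compute from completed-request records. A minimal sketch, assuming requests are logged as (tier, business-days-taken) pairs and the tier targets from earlier in this guide:

```python
# Hypothetical metrics computation over completed approval requests.
from collections import defaultdict
from statistics import mean

TARGETS = {1: 5, 2: 15, 3: 30}  # target cycle times (business days) per tier

def cycle_time_metrics(completed):
    """completed: list of (tier, business_days_taken) tuples.
    Returns per-tier average cycle time and % completed within target."""
    by_tier = defaultdict(list)
    for tier, days in completed:
        by_tier[tier].append(days)
    return {
        tier: {
            "avg_days": mean(days),
            "within_target_pct": 100 * sum(d <= TARGETS[tier] for d in days) / len(days),
        }
        for tier, days in by_tier.items()
    }
```

Segmenting by tier matters: an organization-wide average hides a Tier 1 process that has quietly crept past its five-day target.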

Tooling Suggestions

Request management:

  • Workflow automation platforms
  • IT service management tools
  • GRC (governance, risk, compliance) platforms

Documentation:

  • Document management systems
  • Collaboration platforms
  • Approval tracking databases

Integration:

  • Links to IT inventory
  • Links to vendor management
  • Links to risk register

Frequently Asked Questions

Q: How fast should approval be? A: Tier 1: 5 days. Tier 2: 15 days. Tier 3: 30 days. Faster is better if quality is maintained.

Q: What if business can't wait for approval? A: Design expedited path for genuine emergencies. Track usage and root cause—frequent emergencies indicate process problems.

Q: Should pilots require full approval? A: Lighter approval for pilots is reasonable, but not zero approval. Define pilot parameters: limited data, limited users, defined duration.

Q: How do we handle vendor AI updates? A: Vendor management process should flag significant changes for assessment. Not every update needs full approval—define materiality thresholds.

Q: What about AI embedded in standard software? A: Procurement/vendor process should assess AI capabilities. Activating AI features in approved software may need lighter review depending on risk.

Q: Who breaks ties when reviewers disagree? A: Define in escalation path. Usually the next level of approval authority. Don't leave disputes unresolved.

Q: How do we track approved AI over time? A: Approval creates inventory entry. Connect to monitoring process. Annual recertification for higher-risk systems.


Govern AI Without Gridlock

Effective AI approval workflows protect the organization while enabling responsible innovation. The goal isn't fewer AI deployments—it's better AI deployments with appropriate oversight and documented accountability.

Book an AI Readiness Audit to assess your current AI governance, design approval workflows appropriate to your risk profile, and build processes that work in practice.

[Book an AI Readiness Audit →]


References

  1. ISACA. (2024). Governance of AI Systems.
  2. NIST. (2023). AI Risk Management Framework.
  3. ISO/IEC 42001:2023. AI Management System Requirements.
  4. Gartner. (2024). Building Effective AI Governance Processes.

Michael Lansdowne Hauge

Founder & Managing Partner

Founder & Managing Partner at Pertama Partners. Founder of Pertama Group.

Tags: ai governance, approval workflow, ai process, governance design, authorization, ai approval process design, ai governance workflow template, ai project authorization

Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.

Book an AI Readiness Audit