
AI Model Inventory: How to Document and Track Your AI Systems

December 28, 2025 · 12 min read · Michael Lansdowne Hauge
For: Consultant · CTO/CIO · Head of Operations · Data Science/ML · Board Member · CISO · IT Manager · CHRO

Build foundational AI governance with a comprehensive model inventory. Policy template for registration requirements, implementation guide, and discovery methods.


Key Takeaways

  1. A comprehensive AI inventory is foundational for governance, risk management, and compliance
  2. Shadow AI discovery identifies ungoverned systems that create risk
  3. The inventory should capture business context, not just technical specifications
  4. Regular inventory updates reflect the dynamic nature of AI deployments
  5. The inventory enables impact analysis when regulations or policies change

Ask most organizations "What AI are you running?" and you'll get uncertain answers. Marketing uses a chatbot. Finance has some predictive tools. IT doesn't know what teams have signed up for. A department head bought something last month. This shadow AI problem is growing, and it creates real governance and risk management challenges.

An AI model inventory solves this by systematically documenting what AI systems exist, what they do, and who's responsible for them. This guide shows you how to build and maintain one.


Executive Summary

An AI model inventory is a comprehensive register of all AI and ML systems in use across an organization, both built and bought. It captures key fields such as model purpose, data inputs, risk level, owner, status, and review dates.

Many regulatory frameworks now require organizations to know what AI they are using, making this inventory a compliance necessity rather than a nice-to-have. The benefits span visibility, risk management, audit readiness, and stronger governance overall.

Implementation can start simple with a spreadsheet and mature into dedicated platforms over time. Critically, the scope should include third-party AI embedded in existing tools, which is often the largest category organizations overlook.


Why This Matters Now

Shadow AI is proliferating across organizations at an accelerating pace. Teams sign up for AI tools without IT or governance awareness. ChatGPT, embedded AI features in SaaS products, and department-level purchases create sprawl that no single team can track manually.

Regulators are asking pointed questions. "What AI do you use?" is becoming a standard regulatory inquiry, and organizations that cannot answer clearly face increased scrutiny and potential enforcement action.

The fundamental challenge is straightforward: you cannot manage risks you don't know about. Each AI system carries risks around data privacy, bias, accuracy, and security. Without an inventory, you are blind to your exposure.

Third-party AI compounds this problem because it is everywhere. That CRM has AI features. That analytics platform runs machine learning under the hood. Your AI footprint is almost certainly larger than you think.


Definitions and Scope

AI Model vs. AI System

An AI Model is the core algorithm or machine learning component, often referred to in technical contexts. An AI System is the broader application that includes the model plus data, interfaces, integrations, and business processes. The system-level view is more useful for governance purposes.

For inventory purposes, focus on AI systems rather than individual models. Systems represent what people actually use and interact with day to day.

What Counts as "AI" for Inventory Purposes?

This question is harder than it seems, and the answer requires an explicit organizational definition.

Custom-built machine learning models, purchased AI/ML software, and SaaS products with significant AI features should typically fall in scope. Generative AI tools like ChatGPT and image generators belong here, as do AI-powered automation and predictive analytics that use ML.

Some categories occupy a gray area that each organization must resolve for itself. Simple rule-based automation without ML, statistical models without a learning component, AI features embedded in larger products like grammar check in Office, and personal AI use by individuals could go either way depending on your governance needs.

The best recommendation is to start broad and then adjust. It is far easier to exclude items later than to discover you missed something important during an audit.

First-Party vs. Third-Party AI

First-party AI is what you build or have built for you. You control the model, data, and deployment. Third-party AI is built into products you purchase. You use it but don't control how it works internally.

Both categories need to appear in your inventory. Third-party AI is often the larger category and is frequently the one that gets overlooked entirely.


Step-by-Step Implementation Guide

Phase 1: Define Scope and Criteria (Week 1)

Before discovering AI, decide what you're looking for. You need to resolve several scope decisions upfront: what definition of "AI" will you use, will you include embedded AI features, is there a minimum significance threshold such as only AI used by more than five people, and will you track personal AI tool use.

Document your definition clearly so that discovery efforts across departments remain consistent.
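A documented scope definition can also be encoded so that every department applies the same rules. The sketch below is one illustrative way to do that; the field names, thresholds, and defaults are assumptions, not part of the article's guidance.

```python
from dataclasses import dataclass

# Hypothetical encoding of the Phase 1 scope decisions.
# Field names and thresholds are illustrative, not prescribed.
@dataclass
class Tool:
    name: str
    uses_ml: bool           # has a learning component
    embedded_feature: bool  # AI embedded in a larger product
    user_count: int
    personal_use_only: bool

def in_scope(tool: Tool,
             include_embedded: bool = True,
             min_users: int = 1,
             track_personal_use: bool = False) -> bool:
    """Apply the documented scope definition consistently."""
    if not tool.uses_ml:
        return False
    if tool.embedded_feature and not include_embedded:
        return False
    if tool.personal_use_only and not track_personal_use:
        return False
    return tool.user_count >= min_users

# Example: an embedded AI feature used by 12 people is in scope by default.
crm_ai = Tool("CRM lead scoring", uses_ml=True, embedded_feature=True,
              user_count=12, personal_use_only=False)
print(in_scope(crm_ai))  # True
```

Codifying the definition this way makes "when in doubt, include it" a reviewable default rather than a per-department judgment call.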

Phase 2: Discover Existing AI Systems (Weeks 2-4)

Systematically identify AI already in use through multiple discovery methods working in parallel.

IT and procurement records provide the first layer of visibility. Review software purchase records, cloud service subscriptions, security assessments of vendors, and output from shadow IT discovery tools. These records often reveal AI purchases that governance teams never heard about.

Surveys and interviews fill the gaps that records miss. Ask department heads what AI tools their teams use. Ask IT what tools have AI or ML features. Talk to power users about what they use for analysis and automation. These conversations frequently surface tools that never went through formal procurement.

Technical discovery adds another dimension through cloud resource audits examining AI service usage, API logs showing calls to AI services, and network traffic analysis revealing connections to AI platforms.

Vendor review rounds out the picture. Review feature lists of your existing software stack and ask vendors directly whether they use AI or ML in their products. Many vendors have quietly added AI capabilities that customers never explicitly opted into.

A thorough discovery effort should cover IT and Security for known applications, Procurement for purchases, Finance for expense reports showing tool subscriptions, surveys to department heads, surveys to technical teams, and a review of major vendor feature sets.
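The procurement and expense review above can be partly automated with a crude keyword scan that surfaces candidate rows for human triage. This is a sketch only: the vendor names and keyword list are invented examples, and substring matching will produce false positives (e.g. "ml" inside "html") that a reviewer must filter out.

```python
import csv
import io

# Illustrative keywords for flagging possible AI purchases in expense
# or procurement exports. Tune to your own records; expect false positives.
AI_KEYWORDS = ("ai", "gpt", "copilot", "machine learning",
               "predictive", "chatbot", "llm")

def flag_candidates(rows):
    """Return vendors whose row text mentions an AI-related term."""
    hits = []
    for row in rows:
        text = f"{row['vendor']} {row['description']}".lower()
        if any(kw in text for kw in AI_KEYWORDS):
            hits.append(row["vendor"])
    return hits

# Hypothetical expense export for demonstration.
sample = io.StringIO(
    "vendor,description\n"
    "Acme Analytics,Predictive churn platform subscription\n"
    "OfficeSupplies Co,Printer paper\n"
    "ChatBot Labs,Customer support chatbot licence\n"
)
print(flag_candidates(csv.DictReader(sample)))
# ['Acme Analytics', 'ChatBot Labs']
```

The point is not precision but recall: a cheap scan over Finance's exports catches subscriptions that never crossed IT's desk, which humans then confirm or discard.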

Phase 3: Design Inventory Schema (Week 2-3)

The information you capture for each AI system determines how useful your inventory will be for governance. The essential fields form the backbone of effective tracking.

| Field | Description | Example |
| --- | --- | --- |
| System Name | Common name | "Customer Churn Predictor" |
| System ID | Unique identifier | AI-2024-001 |
| Description | What it does, in business terms | "Predicts which customers are likely to cancel" |
| Type | Category of AI | Predictive model |
| First/Third Party | Built or bought | Third-party (SaaS) |
| Vendor | If third-party | Acme Analytics Inc. |
| Owner | Accountable person | Jane Doe, VP Customer Success |
| Technical Contact | For operational issues | John Smith, IT |
| Business Unit | Department using it | Customer Success |
| Status | Operational state | Production |
| Go-Live Date | When deployed | 2024-03-15 |
| Data Inputs | What data it uses | Customer transactions, behavior data |
| Data Classification | Sensitivity level | Confidential |
| Risk Level | Overall risk assessment | Medium |
| Last Review Date | When last assessed | 2024-06-01 |
| Next Review Date | When next due | 2024-12-01 |

Beyond these essentials, optional fields add significant value as your inventory matures. Integration points, user count, decision authority (advisory vs. automated), compliance status, related policies, and incident history all strengthen governance over time.

Phase 4: Populate Initial Inventory (Week 4-5)

Enter discovered AI systems into your inventory by completing all required fields for each system, assigning an owner (which may require escalation), determining an initial risk level, and setting a review schedule.

Three common challenges emerge during population. When no one owns a particular system, assign an interim owner and escalate through governance channels. When nobody knows what data a system uses, investigate directly or flag it as unknown for priority follow-up. When there is ambiguity about whether something qualifies as AI, apply your documented definition and include it when in doubt.

Phase 5: Establish Registration Process (Week 5-6)

Create a clear process for adding new AI to the inventory going forward. Registration should be triggered by new AI tool procurement, development of new AI capability, discovery of previously unknown AI, or significant changes to an existing AI system.

The registration workflow moves through five stages: the requester completes a registration form, the governance team conducts an initial review, a risk assessment is performed if warranted, the system is added to the inventory, and relevant stakeholders receive notification.

One principle matters above all others here: make it easy. If registration is burdensome, people will not do it, and your inventory will become stale within months.
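The five-stage workflow can be sketched as a simple state machine. Stage names follow the text; the skip logic for an unwarranted risk assessment is an assumption about how the "if warranted" step might be handled.

```python
# Minimal sketch of the five-stage registration workflow.
STAGES = ["submitted", "initial_review", "risk_assessment",
          "added_to_inventory", "stakeholders_notified"]

def advance(record: dict, needs_risk_assessment: bool = True) -> dict:
    """Move a registration to its next stage, skipping risk assessment
    when the initial review decides it is not warranted."""
    i = STAGES.index(record["stage"])
    if STAGES[i] == "initial_review" and not needs_risk_assessment:
        record["stage"] = "added_to_inventory"  # skip optional stage
    elif i + 1 < len(STAGES):
        record["stage"] = STAGES[i + 1]
    return record

# Hypothetical low-risk registration that skips the risk assessment.
reg = {"system": "Invoice Classifier", "stage": "submitted"}
advance(reg)                               # -> initial_review
advance(reg, needs_risk_assessment=False)  # -> added_to_inventory
print(reg["stage"])
```

Keeping the workflow this simple is itself a design choice in service of the "make it easy" principle: every added gate is a reason for people to bypass registration.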

Phase 6: Connect to Risk Assessment Workflow (Week 6+)

The inventory enables risk management only when it is connected to action. New registrations should trigger risk assessments automatically. Review schedules should be risk-based, with higher-risk systems reviewed more frequently. The inventory should feed compliance reporting directly, and incident response teams should use the inventory for impact assessment when issues arise.
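A risk-based review schedule can be computed directly from the inventory. The intervals below are illustrative assumptions, not a standard; the principle is only that higher-risk systems come up for review more often.

```python
from datetime import date, timedelta

# Illustrative cadence: higher risk means more frequent review.
# These specific intervals are assumptions.
REVIEW_INTERVAL_DAYS = {"High": 90, "Medium": 180, "Low": 365}

def next_review(last_review: date, risk_level: str) -> date:
    """Compute the next review date from the last review and risk level."""
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[risk_level])

print(next_review(date(2024, 6, 1), "Medium"))  # 2024-11-28
```

Deriving review dates from risk level, rather than setting them by hand, keeps the schedule consistent as systems are re-rated.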


Policy Template: AI System Registration Requirements

AI SYSTEM REGISTRATION POLICY

1. PURPOSE
This policy establishes requirements for registering AI systems in the
organizational AI inventory to enable appropriate oversight and governance.

2. SCOPE
This policy applies to all AI systems used by [Organization] employees
and contractors, including:
- Custom-built AI/ML models
- Purchased AI software
- SaaS products with significant AI capabilities
- Generative AI tools used for business purposes
- Third-party AI embedded in business tools

3. DEFINITION OF AI SYSTEMS
For purposes of this policy, an AI system is defined as any software
that uses machine learning, neural networks, natural language processing,
or similar techniques to make predictions, generate content, or automate
decisions that would otherwise require human judgment.

4. REGISTRATION REQUIREMENTS
4.1 All AI systems must be registered in the AI inventory before
    production deployment.
4.2 Registration must include all required fields as defined in
    the inventory schema.
4.3 Each AI system must have an assigned owner accountable for
    its responsible use.

5. TIMELINE
5.1 New AI systems: Register before deployment
5.2 Existing AI systems: Register within [30 days] of policy effective date
5.3 Changed AI systems: Update registration within [7 days] of
    material change

6. EXEMPTIONS
[Organization] may exempt certain low-risk AI uses from registration
requirements. Exemptions must be approved by [Governance Committee]
and documented.

7. NON-COMPLIANCE
Failure to register AI systems may result in:
- Required cessation of AI system use
- Disciplinary action per applicable policies
- Required remediation and review

8. REVIEW
This policy will be reviewed annually and updated as needed.

Common Failure Modes

Failure 1: Definition Too Narrow

The symptom is an inventory showing 5 AI systems when the organization actually uses 50. This happens when teams only count custom-built ML and miss third-party and embedded AI entirely. Prevention requires using a broad definition that explicitly includes third-party AI.

Failure 2: Definition Too Broad

The symptom is an inventory with 500 items that the team cannot keep up with. This results from including trivial features like spell-check as "AI." Prevention means setting reasonable thresholds and focusing on systems with genuine governance implications.

Failure 3: Inventory Becomes Stale

The symptom is an inventory last updated 18 months ago with new AI systems missing entirely. The root cause is typically the absence of an ongoing registration process or update triggers. Prevention requires mandatory registration for new AI, periodic refresh cycles, and connection to procurement workflows.

Failure 4: Inventory Not Used for Governance

The symptom is an inventory that exists but does not drive governance decisions. This happens when the inventory is treated as a documentation exercise without connection to risk management. Prevention means connecting the inventory to risk assessment, using it actively for compliance, and reporting on it regularly to leadership.

Failure 5: Owner Unclear or Unaccountable

The symptom is issues arising with no one taking responsibility. This occurs when the inventory has an "owner" field but no real accountability behind it. Prevention requires that owners be individuals rather than teams, that owners have actual authority over the system, and that escalation paths exist for ownership gaps.


Implementation Checklist

Planning

During the planning phase, ensure the AI definition is scoped and documented, inventory fields are designed, discovery methods are identified, and responsibility is assigned for ongoing inventory management.

Discovery

The discovery phase should cover IT and procurement records reviewed, department surveys completed, vendor features reviewed, and shadow AI discovery conducted across the organization.

Inventory Build

Building the initial inventory requires entering all discovered systems, assigning owners, determining risk levels, and setting review schedules for each entry.

Process Establishment

Establishing ongoing processes means documenting the registration policy, creating the registration workflow, defining update triggers, and planning compliance monitoring.

Integration

Full integration connects the inventory to risk assessment workflows, feeds compliance reporting, includes the inventory in incident response procedures, and reports status to governance leadership.


Metrics to Track

Inventory Completeness

Track the number of AI systems in your inventory alongside an estimated-versus-discovered gap analysis. Monitor how many systems lack assigned owners and how many are past due for review. These metrics reveal whether your inventory reflects reality or just a partial picture.

Registration Compliance

Measure whether new AI systems are registered before deployment. Track the time from discovery to registration and the registration rejection rate along with the reasons behind rejections. These indicators show whether your registration process is working or being bypassed.

Governance Effectiveness

Assess how many systems have completed risk assessments and how many high-risk systems have mitigation plans in place. Track audit findings related to undocumented AI. These metrics connect your inventory directly to governance outcomes.
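The completeness metrics above can be computed mechanically from inventory rows. This sketch assumes rows shaped like the schema fields; the dictionary keys and sample data are invented for illustration.

```python
from datetime import date

# Sketch of the completeness metrics, computed over inventory rows.
def inventory_metrics(rows, today: date) -> dict:
    """Count total systems, systems without owners, and overdue reviews."""
    total = len(rows)
    missing_owner = sum(1 for r in rows if not r.get("owner"))
    overdue = sum(1 for r in rows
                  if r.get("next_review") and r["next_review"] < today)
    return {"total_systems": total,
            "missing_owner": missing_owner,
            "overdue_reviews": overdue}

# Hypothetical inventory extract.
rows = [
    {"owner": "Jane Doe", "next_review": date(2024, 12, 1)},
    {"owner": "",         "next_review": date(2024, 5, 1)},
    {"owner": "Ali Khan", "next_review": None},
]
print(inventory_metrics(rows, today=date(2024, 7, 1)))
# {'total_systems': 3, 'missing_owner': 1, 'overdue_reviews': 1}
```

Metrics generated from the live inventory, rather than reported manually, are harder to let drift and make staleness visible early.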


Tooling Suggestions

Spreadsheets make the best starting point. They are simple, accessible, and familiar. A spreadsheet works well for organizations starting out or managing fewer than 50 AI systems, though it becomes limited at scale and lacks workflow automation.

GRC platforms offer a step up, as many governance, risk, and compliance platforms now include AI and model inventory modules. They provide strong integration with risk assessment and compliance workflows already in use.

Dedicated AI governance platforms are purpose-built for AI inventory and governance. They offer more sophisticated features but come with additional cost and complexity that must be justified.

IT asset management extensions represent another option, as some ITAM tools are adding AI tracking capabilities. This path works well if you want to integrate AI inventory with existing asset management infrastructure.


Conclusion

An AI model inventory is foundational to AI governance. You cannot assess risks, ensure compliance, or manage AI responsibly if you don't know what AI you're using.

Start simple. A spreadsheet with essential fields is better than nothing. Discover what AI already exists, which is likely more than you think. Establish a registration process for new AI. Connect the inventory to risk assessment and compliance workflows.

The inventory enables everything else in AI governance. Without it, you're governing blind.


Practical Next Steps

To put these insights into practice, begin by establishing a cross-functional governance committee with clear decision-making authority and regular review cadences. Document your current governance processes and identify gaps against regulatory requirements in your operating markets.

From there, create standardized templates for governance reviews, approval workflows, and compliance documentation. Schedule quarterly governance assessments to ensure your framework evolves alongside regulatory and organizational changes. Finally, build internal governance capabilities through targeted training programs for stakeholders across different business functions.

Common Questions

Why does an AI inventory matter?
A comprehensive inventory is foundational for governance, risk management, and compliance. You can't govern what you don't know about, and the inventory enables impact analysis when policies change.

How do we discover shadow AI?
Audit procurement and expense records, survey teams, analyze network traffic, check SaaS inventories, and create safe reporting channels for undisclosed AI use.

What should each inventory entry capture?
Document system purpose, owner, data processed, risk classification, approval status, vendor details, and business context. Include both custom and vendor AI solutions.

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

