Board & Executive Oversight · Checklist · Beginner

25 AI Questions Every Board Should Ask Management

January 3, 2026 · 12 min read · Michael Lansdowne Hauge

For: Board Directors, Board Chairs, Audit Committee Members, Non-Executive Directors

A structured framework of 25 essential questions for board members to evaluate AI strategy, risk, governance, operations, and ethics. Practical oversight without technical expertise.


Key Takeaways

  1. Use these 25 questions to evaluate AI governance maturity
  2. Assess strategy, risk, ethics, and compliance in board discussions
  3. Identify gaps in management's AI oversight approach
  4. Create accountability through structured questioning
  5. Benchmark your AI governance against best practices

Boards of directors face a new challenge: providing effective oversight of artificial intelligence without deep technical expertise. The good news is that you don't need to understand neural networks to govern AI responsibly. You need the right questions.

This guide provides 25 essential questions organized into five categories. Use them to evaluate your organization's AI strategy, risk posture, and governance maturity.


Executive Summary

  • Fiduciary duty extends to AI — Boards must understand material AI risks and opportunities, just as they do for cybersecurity or financial controls
  • Structured inquiry beats ad-hoc questions — A consistent framework ensures comprehensive coverage and tracks progress over time
  • Five categories cover the landscape — Strategy, Risk, Governance, Operations, and Ethics capture the full scope of board-level AI oversight
  • Questions reveal organizational maturity — How management answers (not just what they answer) indicates governance sophistication
  • Document everything — Board minutes should reflect AI discussions for regulatory and audit purposes
  • Update questions annually — As AI capabilities and regulations evolve, so should your inquiry framework
  • Start with five questions — If you're new to AI oversight, begin with one question from each category

Why This Matters Now

Three forces are converging to make AI board oversight urgent:

Regulatory Expectation. Singapore's IMDA, Malaysia's MDEC, and Thailand's DEPA are all issuing AI governance guidelines. Regulators increasingly expect board-level awareness of AI activities. In financial services, MAS guidelines specifically mention board oversight of AI/ML models. (/insights/ai-regulations-singapore-imda-compliance) (/insights/ai-regulations-malaysia-mdec-framework) (/insights/ai-regulations-thailand-depa-compliance)

Fiduciary Risk. AI systems now make decisions that affect customers, employees, and business outcomes. A biased hiring algorithm or a customer-facing chatbot that provides incorrect information creates liability. Directors may face personal exposure if they failed to exercise reasonable oversight. (/insights/ai-legal-liability)

Competitive Pressure. AI adoption is accelerating. Boards that don't engage with AI strategy risk either missing opportunities (if too cautious) or accumulating hidden risks (if too permissive). (/insights/ai-board-oversight-directors-guide)

The bottom line: AI is no longer a technical curiosity. It's a board-level governance matter.


Definitions and Scope

AI at the board level refers to systems that make or support significant business decisions through pattern recognition, prediction, or content generation. This includes:

  • Customer-facing chatbots and virtual assistants
  • Fraud detection and credit scoring models
  • Recruitment and HR decision-support tools
  • Demand forecasting and inventory optimization
  • Marketing personalization engines
  • Document processing and extraction systems

Board oversight means ensuring:

  1. Management has a clear AI strategy aligned with business objectives
  2. Risks are identified, assessed, and mitigated appropriately
  3. Governance structures exist with clear accountability
  4. Compliance requirements are met
  5. Ethical considerations are addressed

You don't need to understand how AI works technically. You need to ensure management does, and that they've established appropriate controls.


The 25 Questions

Category 1: Strategy (Questions 1-5)

These questions assess whether AI initiatives align with business objectives and whether resources are allocated appropriately.

1. What is our AI strategy, and how does it connect to our overall business strategy?

What good looks like: A clear narrative linking AI initiatives to specific business outcomes (revenue growth, cost reduction, customer experience improvement). Red flag: AI projects pursued because "everyone is doing it."

2. What AI systems are currently in production, and which are in development?

What good looks like: A documented inventory with business owners, use cases, and deployment status. Red flag: Management cannot provide a complete list, indicating shadow AI. (/insights/ai-model-inventory-document-track-systems)

3. How do we prioritize AI investments, and what is the total AI spend?

What good looks like: A prioritization framework based on business impact, feasibility, and risk. Clear budget allocation across build vs. buy, infrastructure, and talent. Red flag: No centralized view of AI spending. (/insights/ai-prioritization-matrix)

4. What competitive advantages are we building or defending with AI?

What good looks like: Specific examples of differentiation, efficiency gains, or defensive moves against AI-native competitors. Red flag: Generic statements about "staying current."

5. What is our build vs. buy vs. partner philosophy for AI capabilities?

What good looks like: Clear criteria for when to build custom solutions, purchase vendor products, or partner. Awareness of vendor dependency risks. Red flag: Purely opportunistic decisions. (/insights/ai-vendor-evaluation-framework-choose-partner)


Category 2: Risk (Questions 6-10)

These questions assess whether AI risks are identified, measured, and managed within the organization's risk appetite.

6. What is our AI risk framework, and how does it integrate with enterprise risk management?

What good looks like: Documented framework covering AI-specific risks (bias, drift, security) integrated with existing ERM processes. Red flag: AI risk managed separately or not at all. (/insights/ai-risk-assessment-framework-templates)

7. What are the top five AI risks we face, and how are we mitigating them?

What good looks like: Specific, prioritized risks with named owners, mitigation plans, and residual risk assessments. Red flag: Generic risks without specifics. (/insights/ai-risks-executives-must-understand)

8. How do we monitor AI systems for performance degradation or unintended behavior?

What good looks like: Continuous monitoring dashboards, drift detection, and defined thresholds for intervention. Red flag: "We check manually when issues arise." (/insights/ai-model-monitoring-drift-detection)

9. What is our incident response plan for AI failures or breaches?

What good looks like: Documented playbook with escalation paths, communication templates, and post-incident review process. Red flag: No AI-specific incident response. (/insights/ai-incident-response-plan)

10. How do we assess third-party AI vendor risks?

What good looks like: Structured vendor risk assessment covering security, compliance, model transparency, and exit strategies. Red flag: Reliance on vendor certifications only. (/insights/ai-vendor-security-assessment-checklist)


Category 3: Governance (Questions 11-15)

These questions assess whether clear accountability structures exist for AI decision-making.

11. Who is accountable for AI governance, and what is the reporting structure to the board?

What good looks like: Named executive (often CRO, CTO, or dedicated AI lead) with clear mandate. Regular board reporting cadence. Red flag: Governance by committee with diffuse accountability. (/insights/ai-board-reporting-template-updates)

12. Do we have an AI governance committee, and what are its responsibilities?

What good looks like: Cross-functional committee with clear charter, meeting cadence, and decision authority. Red flag: Committee exists on paper but rarely meets. (/insights/ai-governance-committee-setup-guide)

13. What AI policies do we have, and when were they last updated?

What good looks like: Documented policies covering acceptable use, data handling, model approval, and ethical principles. Updated within past 12 months. Red flag: No policies or outdated documents. (/insights/ai-policy-essential-components) (/insights/ai-acceptable-use-policy-template)

14. How do we approve new AI use cases before deployment?

What good looks like: Defined approval workflow with risk assessment, stakeholder sign-off, and documentation. Red flag: Business units deploy AI tools without central awareness. (/insights/ai-approval-workflow-designing-governance-processes)

15. How do we handle AI policy exceptions?

What good looks like: Formal exception process with approval authority, documentation, and time limits. Red flag: No exception process (rigid) or too many exceptions (ineffective). (/insights/ai-policy-exceptions-process)


Category 4: Operations (Questions 16-20)

These questions assess whether AI systems are managed effectively throughout their lifecycle.

16. How do we ensure AI model quality before deployment?

What good looks like: Testing protocols covering accuracy, fairness, edge cases, and adversarial inputs. Staged rollout process. Red flag: "Deploy and pray." (/insights/ai-security-testing-vulnerability-assessment)

17. What is our approach to AI training and skills development?

What good looks like: Tiered training program covering literacy (all employees), practitioners (AI builders), and governance (leaders). Red flag: No formal training program. (/insights/designing-ai-training-program-framework-ld-leaders)

18. How do we manage AI models throughout their lifecycle?

What good looks like: Model lifecycle process from development through retirement, including versioning, monitoring, and sunset criteria. Red flag: Models deployed and forgotten. (/insights/ai-model-inventory-document-track-systems)

19. What is our approach to data quality for AI systems?

What good looks like: Data governance program covering sourcing, validation, lineage, and access controls. Red flag: "We use whatever data is available." (/insights/ai-data-classification-categorizing-data)

20. How do we ensure business continuity if critical AI systems fail?

What good looks like: Documented fallback procedures, manual overrides, and recovery time objectives. Red flag: Critical dependency without backup plans.


Category 5: Ethics and Compliance (Questions 21-25)

These questions assess whether AI systems operate fairly, transparently, and within legal boundaries.

21. How do we ensure our AI systems are fair and unbiased?

What good looks like: Bias testing during development, ongoing monitoring, and remediation processes. Diverse development teams. Red flag: "We trust the model." (/insights/ai-bias-risk-assessment)

22. What is our approach to AI transparency and explainability?

What good looks like: Defined standards for explainability based on use case risk level. Customer-facing explanations where required. Red flag: Black-box models in high-stakes decisions.

23. How do we comply with data protection requirements for AI?

What good looks like: PDPA/privacy impact assessments for AI systems, consent management, and data minimization practices. Red flag: Privacy treated as afterthought. (/insights/pdpa-ai-compliance-singapore-guide) (/insights/malaysia-pdpa-ai-compliance-guide)

24. What regulatory requirements apply to our AI systems, and are we compliant?

What good looks like: Regulatory mapping by jurisdiction and industry. Compliance gap analysis and remediation plans. Red flag: No regulatory inventory. (/insights/ai-compliance-checklist-regulatory-preparation)

25. What mechanisms exist for stakeholders to raise AI-related concerns?

What good looks like: Clear channels for employees, customers, and partners to report issues. Protection for whistleblowers. Red flag: No reporting mechanism.


How to Use This Framework

Establish a Cadence. Don't ask all 25 questions in one meeting. Rotate through categories:

  • Quarter 1: Strategy + Risk (10 questions)
  • Quarter 2: Governance + Operations (10 questions)
  • Quarter 3: Ethics/Compliance + Strategy follow-up (7-8 questions)
  • Quarter 4: Annual comprehensive review

Document Responses. Record answers in board minutes or a dedicated AI governance log. This creates an audit trail and enables progress tracking.

Request Evidence. Good answers should be backed by documentation. Ask to see the AI inventory, risk register, or policy documents referenced.

Track Action Items. Each session should generate specific action items with owners and deadlines. Review completion at subsequent meetings.
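To make "Document Responses" and "Track Action Items" concrete, a governance log can be as simple as one structured record per question asked. The Python sketch below is illustrative only; the class and field names are assumptions, not a prescribed schema or an existing tool.

```python
# Illustrative sketch of an AI governance log entry (names are hypothetical).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ActionItem:
    description: str
    owner: str                      # a named owner, not a committee
    due: date
    closed_on: date | None = None   # None while the item is still open

@dataclass
class InquiryRecord:
    question_no: int                # 1-25, matching the framework above
    category: str                   # Strategy, Risk, Governance, Operations, Ethics/Compliance
    asked_on: date
    answer_summary: str
    evidence_received: list[str] = field(default_factory=list)
    action_items: list[ActionItem] = field(default_factory=list)

# Example: Question 2 (AI system inventory) asked at a February meeting
record = InquiryRecord(
    question_no=2,
    category="Strategy",
    asked_on=date(2026, 2, 12),
    answer_summary="Inventory exists but excludes business-unit pilots.",
    evidence_received=["AI model inventory extract"],
    action_items=[ActionItem("Extend inventory to pilot systems", "CTO", date(2026, 5, 31))],
)
```

A record like this gives the board minutes, the audit trail, and the action-item tracker a single source.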


RACI Matrix: Who Answers Each Category

Category | Responsible (Answers) | Accountable | Consulted | Informed
Strategy | CEO, CDO/CTO | CEO | CFO, Business Unit Heads | Board
Risk | CRO, CISO | CRO | Legal, IT Security | Board, Audit Committee
Governance | AI Governance Lead | CEO/CRO | Legal, HR, IT | Board
Operations | CTO/CDO | COO | IT, Data Teams | Board
Ethics/Compliance | CCO, DPO | CEO | Legal, HR | Board, Audit Committee

Common Failure Modes

Asking Without Following Through. Questions are asked, answers are given, nothing changes. Fix: Assign action items with deadlines and track completion.

Accepting Buzzwords. Management responds with jargon ("We use responsible AI") without specifics. Fix: Ask for examples, metrics, and documentation.

Delegating to a Single Meeting. AI is discussed once per year in a "deep dive" and ignored otherwise. Fix: Integrate AI questions into regular reporting.

Focusing Only on Opportunities. Enthusiasm for AI benefits without attention to risks. Fix: Balance strategy questions with risk and governance questions.

Overloading Management. Requesting extensive documentation that distracts from actual governance work. Fix: Focus on material risks and existing artifacts.

Ignoring Negative Signals. When management can't answer a question, moving on without follow-up. Fix: Unanswered questions become priority action items.


Board AI Inquiry Checklist

Before the Meeting:

  • Select 5-10 questions based on current priorities
  • Request relevant documentation in advance
  • Review previous meeting action items
  • Identify specific AI systems or initiatives to discuss

During the Meeting:

  • Ask open-ended questions, not yes/no
  • Request evidence for assertions
  • Note gaps in knowledge or documentation
  • Assign action items with owners and deadlines
  • Schedule follow-up for unresolved items

After the Meeting:

  • Document questions, answers, and action items in minutes
  • Distribute action items to owners
  • Update AI governance tracking document
  • Identify topics for next session

Metrics to Track

Inquiry Metrics:

  • Percentage of 25 questions addressed in the past 12 months
  • Number of action items generated per session
  • Action item closure rate (target: >80%)
  • Days to close action items (target: <90); see the sketch after this list for how the two closure metrics can be computed
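For illustration only, the two closure metrics above can be derived from a simple action-item log. This is a minimal sketch, assuming each item records when it was raised and, if applicable, when it was closed; the field names are hypothetical.

```python
# Minimal sketch: compute action-item closure rate and median days to close.
# Field names ("raised_on", "closed_on") are illustrative assumptions.
from datetime import date
from statistics import median

def closure_metrics(items):
    """Return (closure_rate, median_days_to_close) for a list of action items."""
    if not items:
        return None, None
    closed = [i for i in items if i.get("closed_on")]
    rate = len(closed) / len(items)
    days = median((i["closed_on"] - i["raised_on"]).days for i in closed) if closed else None
    return rate, days

items = [
    {"raised_on": date(2026, 2, 12), "closed_on": date(2026, 4, 1)},  # closed after 48 days
    {"raised_on": date(2026, 2, 12), "closed_on": None},              # still open
]
rate, days = closure_metrics(items)
print(f"Closure rate: {rate:.0%} (target > 80%)")      # 50%
print(f"Median days to close: {days} (target < 90)")   # 48
```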

Governance Maturity Indicators:

  • AI inventory completeness (% of systems documented)
  • Policy currency (% of policies updated in past 12 months)
  • Incident count and severity trend
  • Training completion rates
  • Audit findings related to AI

Tooling Suggestions

Board Portal. Most organizations use board management software. Create a dedicated AI governance section with policies, inventories, and meeting materials.

Question Tracking. Use a simple spreadsheet or a board portal feature to track which questions have been asked, when they were asked, and what answers were received, as sketched below.
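As one illustration, the tracker need not be more than one row per framework question. The sketch below assumes a plain CSV export; the column names and file name are hypothetical.

```python
# Illustrative question-tracking sheet: one row per framework question.
import csv

FIELDS = ["question_no", "category", "last_asked", "answer_summary", "evidence", "open_actions"]

rows = [
    {"question_no": 2, "category": "Strategy", "last_asked": "2026-02-12",
     "answer_summary": "Inventory exists but excludes business-unit pilots",
     "evidence": "AI model inventory extract", "open_actions": 1},
    {"question_no": 9, "category": "Risk", "last_asked": "",  # not yet asked this cycle
     "answer_summary": "", "evidence": "", "open_actions": 0},
]

with open("board_ai_question_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Blank "last_asked" cells make coverage gaps visible at a glance, which feeds the inquiry metrics above.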

Dashboard. Request a one-page AI governance dashboard for each board meeting showing key metrics, risk status, and project updates. (/insights/ai-executive-dashboard-metrics)

Document Repository. Centralized location for AI policies, risk assessments, and audit reports. Board members should have access.




Ready to Strengthen Your AI Oversight?

Asking the right questions is the first step. Understanding the answers—and acting on them—requires depth.

Book an AI Readiness Audit to get an independent assessment of your organization's AI governance maturity. We'll evaluate your policies, risk management, and operational readiness, then provide actionable recommendations for board and management.

[Contact Pertama Partners →]


References

  1. Singapore Academy of Law. (2024). "AI Governance and the Board: Legal Considerations."
  2. IMDA & PDPC. (2023). "AI Governance Framework - Second Edition."
  3. Institute of Directors Singapore. (2024). "Board AI Oversight Guide."
  4. MAS. (2023). "FEAT Principles Assessment Methodology."
  5. OECD. (2024). "Corporate Governance of AI: Board Responsibilities."
  6. PwC. (2024). "Governing AI: A Board Toolkit."
  7. World Economic Forum. (2024). "AI Governance Alliance: Board Briefing."

Frequently Asked Questions

Do board members need technical AI expertise to use these questions?

No. These questions are designed for non-technical board members. You're evaluating management's ability to govern AI, not the AI itself. Consider adding AI expertise over time, but it's not a prerequisite.

Michael Lansdowne Hauge

Founder & Managing Partner at Pertama Partners. Founder of Pertama Group.

