Back to Insights
AI Security & Data ProtectionChecklistPractitioner

50 Security Questions to Ask Your AI Vendor (With Red Flag Answers)

October 17, 202512 min readMichael Lansdowne Hauge
For:IT DirectorsProcurement LeadersSecurity EngineersCompliance Officers

50 essential security questions for AI vendor evaluation across data handling, security controls, compliance, and AI-specific concerns. Includes red flag answer indicators.

Tech Code Review - ai security & data protection insights

Key Takeaways

  • 1.Vague answers about 'industry standard security' are a red flag for inadequate practices
  • 2.Ask specifically how customer data is used for model training and improvement
  • 3.Request documentation of data residency, encryption methods, and access controls
  • 4.Evasive responses about incident history or breach notification procedures indicate risk
  • 5.The best vendors welcome detailed security questions and provide specific answers

50 Security Questions to Ask Your AI Vendor (With Red Flag Answers)

The right questions reveal whether an AI vendor's security claims are genuine or just marketing. This guide provides 50 specific questions across six critical categories, along with red flag answers that should give you pause.

Executive Summary

  • Generic security questions miss AI-specific risks. Traditional vendor questionnaires don't cover training data usage, model security, or AI-unique vulnerabilities.
  • How vendors answer matters as much as what they answer. Vague, evasive, or overly-qualified responses often indicate problems.
  • Documentation should support verbal claims. Answers without evidence are just promises.
  • Red flags don't always mean rejection. Some gaps can be addressed; others are fundamental problems.
  • Prioritize based on your data sensitivity. Not every question carries equal weight for every use case.
  • Use these questions to compare vendors. Consistent questioning enables meaningful comparison.

How to Use This Guide

  1. Customize for your context. Select questions based on data sensitivity and use case.
  2. Send in advance. Give vendors time to provide thoughtful, documented responses.
  3. Follow up on vague answers. Press for specifics when responses are unclear.
  4. Request evidence. Ask for documentation to support claims.
  5. Track red flags. Use the scoring guidance to inform your decision.

Category 1: Data Handling (10 Questions)

Q1: How is our data used after we submit it for processing?

Expected: Clear statement of use limited to providing the service. Red Flag: "Data may be used to improve our services" without opt-out.

Q2: Is our data ever used to train your AI models?

Expected: "No, customer data is not used for training" with written commitment. Red Flag: "By default yes, but you can opt out" or vague language about "service improvement."

Q3: Who within your organization can access our data?

Expected: Limited, defined roles with access logging. Red Flag: "All employees have access as needed" or inability to specify.

Q4: Where is our data processed geographically?

Expected: Specific data centers/regions identified. Red Flag: "It depends" or "globally" without specifics.

Q5: Where is our data stored?

Expected: Specific storage locations with any replication noted. Red Flag: Vague answers or unwillingness to disclose.

Q6: How long is our data retained?

Expected: Clear retention periods with rationale. Red Flag: "Indefinitely" or "we don't delete data."

Q7: How can we verify that our data has been deleted?

Expected: Documented deletion process with confirmation mechanism. Red Flag: "You'll have to trust us" or no verification available.

Q8: What happens to our data if we terminate the contract?

Expected: Clear data return and deletion procedures with timeline. Red Flag: No defined process or extended retention post-termination.

Q9: Do you use subprocessors who access our data?

Expected: Disclosed list with data handling requirements for each. Red Flag: "We don't know" or unwillingness to disclose.

Q10: How do you ensure subprocessors meet your security standards?

Expected: Documented assessment process and contractual requirements. Red Flag: "We trust them" or no formal process.


Category 2: Security Controls (10 Questions)

Q11: What encryption do you use for data at rest?

Expected: Specific algorithms (AES-256), key management details. Red Flag: "Yes, we use encryption" without specifics.

Q12: What encryption do you use for data in transit?

Expected: TLS 1.2+ with specific cipher suites. Red Flag: Older TLS versions or inability to specify.

Q13: How do you manage encryption keys?

Expected: HSM usage, key rotation policies, access controls. Red Flag: "Our IT team manages them" without formal process.

Q14: Describe your authentication mechanisms for API access.

Expected: API keys, OAuth, MFA options with clear documentation. Red Flag: Basic authentication only or no MFA option.

Q15: How do you control access to customer environments?

Expected: Role-based access, least privilege, regular reviews. Red Flag: Shared credentials or no access reviews.

Q16: What logging do you maintain for access to our data?

Expected: Comprehensive audit logs with retention and review process. Red Flag: "We log important events" without specifics.

Q17: How is your production environment segmented from development?

Expected: Complete separation with defined controls. Red Flag: "Developers can access production when needed."

Q18: What vulnerability scanning do you perform?

Expected: Regular automated scanning with defined remediation SLAs. Red Flag: "We scan periodically" without frequency or follow-up process.

Q19: When was your last penetration test and what was the scope?

Expected: Within last 12 months, covering relevant services, remediation complete. Red Flag: No penetration testing or tests older than 18 months.

Q20: How quickly do you patch critical vulnerabilities?

Expected: Defined SLAs (e.g., critical within 24-48 hours). Red Flag: "As soon as possible" without specific commitments.


Category 3: Compliance and Certifications (8 Questions)

Q21: What security certifications do you hold?

Expected: SOC 2 Type II, ISO 27001, or equivalent. Red Flag: "We're working toward certification" without timeline.

Q22: Can we review your SOC 2 Type II report?

Expected: Report provided under NDA without hesitation. Red Flag: "We only share summaries" or report is Type I only.

Q23: What is the scope of your ISO 27001 certification?

Expected: Scope clearly covers the services you'll use. Red Flag: Certification scope is limited or unclear.

Q24: How do you ensure compliance with Singapore PDPA?

Expected: Specific practices, DPO appointment, documentation. Red Flag: "We follow best practices" without specifics.

Q25: How do you ensure compliance with Malaysia PDPA?

Expected: Specific practices addressing Malaysia requirements. Red Flag: No awareness of Malaysia-specific requirements.

Q26: How do you ensure compliance with Thailand PDPA?

Expected: Specific practices addressing Thailand requirements. Red Flag: No awareness of Thailand-specific requirements.

Q27: Do you have a Data Protection Officer?

Expected: Named DPO with contact information and clear responsibilities. Red Flag: "Security is everyone's responsibility" without specific ownership.

Q28: How do you stay current with regulatory changes?

Expected: Defined process, legal review, update mechanism. Red Flag: "Our lawyers handle it" without specific process.


Category 4: Incident Response (7 Questions)

Q29: Do you have a documented incident response plan?

Expected: Yes, with regular testing and defined roles. Red Flag: "We handle incidents as they arise" without formal plan.

Q30: How quickly will you notify us of a data breach?

Expected: Specific timeline (24-72 hours) with contractual commitment. Red Flag: "As soon as reasonably practicable" without definition.

Q31: What information will you provide in a breach notification?

Expected: Defined format with scope, impact, and remediation. Red Flag: "Whatever we know at the time" without specifics.

Q32: Have you experienced any security incidents in the past 24 months?

Expected: Honest answer with lessons learned and improvements made. Red Flag: "No" without hesitation (may indicate lack of detection or transparency).

Q33: How do you detect security incidents?

Expected: SIEM, monitoring, alerting with defined thresholds. Red Flag: "We investigate when things seem wrong" without proactive detection.

Q34: What is your mean time to detect (MTTD) and respond (MTTR)?

Expected: Defined metrics with continuous improvement focus. Red Flag: "We don't track that" or metrics measured in days/weeks.

Q35: Do you conduct post-incident reviews?

Expected: Yes, with documented learnings and remediation tracking. Red Flag: "We move on and fix the problem" without formal review.


Category 5: AI-Specific Security (10 Questions)

Q36: How do you protect against prompt injection attacks?

Expected: Specific techniques (input validation, output filtering, sandboxing). Red Flag: "What's prompt injection?" or "our model is secure."

Q37: How do you prevent training data leakage through model outputs?

Expected: Specific techniques (differential privacy, output monitoring). Red Flag: Unaware of the risk or no specific controls.

Q38: What testing do you perform for AI-specific vulnerabilities?

Expected: Red team testing, AI security assessments, adversarial testing. Red Flag: "We test like any other software" without AI-specific focus.

Q39: How do you monitor for AI model misuse?

Expected: Usage monitoring, anomaly detection, abuse patterns. Red Flag: "Users agree to terms of service" without technical controls.

Q40: How is your AI model protected from extraction or theft?

Expected: Access controls, rate limiting, monitoring for extraction patterns. Red Flag: No awareness of model extraction risks.

Q41: What happens if your AI produces harmful or incorrect outputs?

Expected: Output monitoring, user reporting, rapid response process. Red Flag: "Users should verify outputs" with no vendor responsibility.

Q42: How do you handle AI bias in your models?

Expected: Testing for bias, ongoing monitoring, correction processes. Red Flag: "Our model is unbiased" without testing evidence.

Q43: Can you provide documentation on your AI training data sources?

Expected: Documented sources with rights verification. Red Flag: Unwilling to discuss training data origins.

Q44: How do you isolate different customers' AI contexts?

Expected: Clear isolation mechanisms, no cross-customer data leakage. Red Flag: "Our system handles this automatically" without specifics.

Q45: Do you have an AI-specific security policy?

Expected: Documented policy covering AI-unique risks. Red Flag: "Our general security policy covers this" without AI-specific elements.


Category 6: Contractual and Operational (5 Questions)

Q46: Can you sign our Data Processing Agreement?

Expected: Yes, or provide acceptable standard DPA. Red Flag: "We don't sign customer agreements" or heavy pushback.

Expected: Defined response times for different severity levels. Red Flag: "We prioritize security" without specific commitments.

Q48: How do you notify customers of security-relevant changes?

Expected: Defined notification process with reasonable lead time. Red Flag: "Changes are in our changelog" without proactive notification.

Q49: Can we conduct our own security assessment of your service?

Expected: Yes, with reasonable scope and scheduling. Red Flag: "Our certifications should be sufficient" without customer testing option.

Q50: What happens to our data and access if you are acquired?

Expected: Defined process with customer notification and data rights. Red Flag: "Standard asset transfer" without customer protections.


Scoring and Decision Framework

After collecting responses, score each category:

ScoreCriteria
GreenClear answers, documentation provided, no concerns
YellowAcceptable answers but some gaps or areas for improvement
RedRed flag answers or unable to answer

Decision guidance:

  • All Green: Proceed with standard contract negotiations
  • Mostly Green, some Yellow: Proceed with documented remediation requirements
  • Any Red in Categories 1-2: Significant concerns, require remediation before proceeding
  • Any Red in Category 4-5: Elevated risk, consider alternatives
  • Multiple Reds: Walk away or limit data exposure significantly

FAQ

Q: Should we ask all 50 questions? A: Prioritize based on data sensitivity. Critical vendors handling sensitive data warrant full assessment.

Q: What if vendors push back on questions? A: Legitimate vendors welcome thorough assessment. Pushback itself is a yellow flag.

Q: How do we handle vendors who won't answer certain questions? A: Document refusals. Consider whether the gap is acceptable given your risk tolerance.

Q: Should these questions be in the contract? A: Key commitments (data usage, breach notification, deletion) should be contractual, not just questionnaire responses.


Next Steps

These questions support broader vendor assessment:


Book an AI Readiness Audit

Need help evaluating AI vendors? Our AI Readiness Audit includes vendor assessment support and security evaluation.

Book an AI Readiness Audit →


References

  1. Cloud Security Alliance. Consensus Assessment Initiative Questionnaire (CAIQ).
  2. OWASP. AI Security and Privacy Guide.
  3. Singapore PDPC. Guide to Managing Data Intermediaries.
  4. ISO/IEC 27001:2022 & SOC 2 Trust Services Criteria.

Frequently Asked Questions

Watch for vague responses like "industry standard security," reluctance to share documentation, inability to specify data locations, unclear data retention policies, and evasiveness about incident history or breach notification procedures.

Ask whether your data will be used for model training, if you can opt out, how training data is protected, what data provenance verification exists, and whether the vendor can demonstrate the source of their training data.

Request audit reports (not just certification logos), conduct reference checks with similar customers, perform technical assessments during POC, and include audit rights in contracts.

References

  1. Cloud Security Alliance. Consensus Assessment Initiative Questionnaire (CAIQ).. Cloud Security Alliance Consensus Assessment Initiative Questionnaire
  2. OWASP. AI Security and Privacy Guide.. OWASP AI Security and Privacy Guide
  3. Singapore PDPC. Guide to Managing Data Intermediaries.. Singapore PDPC Guide to Managing Data Intermediaries
  4. ISO/IEC 27001:2022 & SOC 2 Trust Services Criteria.. ISO/IEC & SOC Trust Services Criteria (2022)
Michael Lansdowne Hauge

Founder & Managing Partner

Founder & Managing Partner at Pertama Partners. Founder of Pertama Group.

ai vendor questionssecurity questionnairevendor evaluationAI vendor security assessment frameworkthird party AI risk evaluationAI vendor compliance requirementsSaaS AI security checklistenterprise AI security vetting

Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.

Book an AI Readiness Audit