AI Compliance & Regulation · Guide · Practitioner

AI and Consent: When Do You Need It and How to Obtain It Properly

October 24, 2025 · 9 min read · Michael Lansdowne Hauge
For: Data Protection Officers, Legal Counsel, Compliance Managers, Operations Leaders

A comprehensive guide for DPOs on when consent is legally required for AI processing, how to obtain valid consent across Singapore, Malaysia, and Thailand, and common pitfalls to avoid.


Key Takeaways

  1. AI processing of personal data requires a valid legal basis; consent is one option, not the only one
  2. Legitimate interests may apply for business AI use, but require a documented assessment
  3. Consent for AI must be specific, informed, and freely given; blanket consent is insufficient
  4. Implement clear opt-out mechanisms and honor data subject rights requests
  5. Keep consent records and regularly review whether your legal basis remains valid


Executive Summary

  • Consent is not always the appropriate legal basis for AI processing — legitimate interests, contractual necessity, or legal obligations may apply depending on context
  • Singapore's PDPA, Malaysia's PDPA, and Thailand's PDPA each have specific consent requirements that differ from EU GDPR approaches
  • Automated decision-making with significant effects typically requires explicit consent or at minimum, notification and opt-out mechanisms
  • Consent must be freely given, specific, informed, and unambiguous — blanket consent buried in terms of service rarely meets this standard
  • Sensitive personal data (health, biometric, religious) triggers heightened consent requirements across all three jurisdictions
  • Documentation of consent — when obtained, how, and what was disclosed — is as critical as the consent itself
  • Consent can be withdrawn, and your AI systems must accommodate this operationally
  • Cross-border transfers for AI processing may require additional consent or safeguards

Why This Matters Now

The regulatory enforcement landscape for AI and data protection is maturing rapidly across Southeast Asia.

Singapore has signaled increased scrutiny of AI systems through PDPC advisories, with financial penalties reaching SGD 1 million for serious breaches. The Model AI Governance Framework emphasizes transparency and consent where appropriate.

Malaysia has amended its PDPA with enhanced provisions for automated decision-making, and enforcement actions have increased 40% year-over-year since 2024.

Thailand continues to operationalize its PDPA with sector-specific guidance, including explicit requirements for AI-driven profiling in financial services and healthcare.

For DPOs, the risk calculation has shifted. Getting consent wrong doesn't just create legal exposure — it creates operational chaos when systems must be paused, retrained, or data purged following enforcement action.


Definitions and Scope

Consent in AI deployments refers to a data subject's agreement to the processing of their personal data by AI systems. This includes:

  • Collection — Gathering data that will feed AI models
  • Processing — Using data for training, inference, or decision-making
  • Automated decisions — Making determinations without human intervention
  • Profiling — Building behavioral or predictive profiles

Consent is one of several legal bases for processing personal data. Others include:

| Legal Basis | When It Applies | AI Example |
| --- | --- | --- |
| Contractual necessity | Processing required to fulfill a contract | AI customer service for purchased products |
| Legitimate interests | Organization's interests, balanced against data subject rights | Fraud detection on transactions |
| Legal obligation | Required by law | AML screening using AI |
| Vital interests | Life-threatening emergencies | AI triage in medical emergencies |
| Public interest | Government/public authority functions | Public health surveillance AI |

Jurisdiction-Specific Definitions

Singapore PDPA: Consent must be given "voluntarily" after receiving "information that would be reasonably required" for the data subject to make an informed decision.

Malaysia PDPA: Requires consent to be given by a data subject who "has been informed" of the purpose — emphasis on disclosure at time of collection.

Thailand PDPA: Distinguishes between general consent and explicit consent, with explicit consent required for sensitive data and cross-border transfers.



Step-by-Step Implementation Guide

Step 1: Map Your AI Data Flows

Before addressing consent, you need clarity on what personal data your AI systems touch.

Action items:

  • Inventory all AI systems (including embedded AI in vendor products)
  • Document data inputs, outputs, and retention periods
  • Classify data by sensitivity level
  • Identify data subjects affected (customers, employees, third parties)

Timeline: 2-4 weeks for initial mapping
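A data-flow inventory is easier to keep current when each entry has a fixed shape. The sketch below is one minimal way to structure the mapping; the field names and example systems are illustrative, not mandated by any regulation.

```python
from dataclasses import dataclass

# Hypothetical record structure for one AI data-flow inventory entry.
@dataclass
class AIDataFlow:
    system_name: str
    data_inputs: list          # e.g. ["chat transcripts", "transaction history"]
    data_subjects: list        # e.g. ["customers", "employees"]
    sensitivity: str           # "standard" or "sensitive" (health, biometric, ...)
    retention_days: int
    vendor_embedded: bool = False  # AI embedded in a vendor product?

def sensitive_flows(inventory):
    """Flows touching sensitive data, which trigger heightened consent rules."""
    return [f for f in inventory if f.sensitivity == "sensitive"]

inventory = [
    AIDataFlow("chatbot", ["chat transcripts"], ["customers"], "standard", 90),
    AIDataFlow("health-triage", ["symptoms"], ["patients"], "sensitive", 365,
               vendor_embedded=True),
]
print([f.system_name for f in sensitive_flows(inventory)])  # ['health-triage']
```

Filtering by sensitivity gives you the shortlist of systems that need explicit-consent review first.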

Step 2: Determine Your Legal Basis

For each AI use case, assess whether consent is required or another legal basis applies.

Action items:

  • Apply the legal basis framework above
  • Document your reasoning (regulators will ask)
  • For legitimate interests, complete a Legitimate Interests Assessment (LIA)
  • Consult legal counsel for borderline cases

Timeline: 1-2 weeks per major AI system
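The legal-basis assessment above can be sketched as a rule-of-thumb helper. The ordering and labels here are simplified assumptions for illustration; real determinations need documented analysis and legal counsel.

```python
# Illustrative helper mirroring the legal-basis table earlier in this guide.
# Simplified logic; not a substitute for legal review.
def suggest_legal_basis(contract_needed: bool, legally_mandated: bool,
                        significant_automated_decision: bool) -> str:
    if legally_mandated:
        return "legal obligation"
    if contract_needed:
        return "contractual necessity"
    if significant_automated_decision:
        # High-risk profiling or automated decisions: default to explicit consent
        return "consent (explicit)"
    return "legitimate interests (complete and document an LIA)"

# Fraud detection on transactions: no contract need, no mandate, no significant
# automated decision in this simplified example.
print(suggest_legal_basis(False, False, False))
# legitimate interests (complete and document an LIA)
```

Encoding the reasoning this way also produces the documentation trail regulators will ask for, because each input flag maps to a question you answered.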

Step 3: Design Valid Consent Mechanisms

When consent is the appropriate legal basis, design mechanisms that meet the validity criteria.

Validity criteria:

  • Freely given — No negative consequences for refusing; not bundled with other consents
  • Specific — Separate consent for each distinct purpose
  • Informed — Clear disclosure of what, why, who, and consequences
  • Unambiguous — Clear affirmative action (no pre-ticked boxes)

Action items:

  • Draft consent language in plain terms
  • Design user interface for consent capture
  • Build in granularity (allow consent for some AI uses but not others)
  • Create version control for consent forms

Timeline: 2-4 weeks for design and legal review
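Granularity and version control fit naturally together at capture time: record one entry per purpose, each tied to the consent form version the data subject actually saw. A minimal sketch, with illustrative field names:

```python
from datetime import datetime, timezone

def capture_consent(subject_id, purposes, form_version):
    """One timestamped record per distinct purpose; no bundled consent."""
    ts = datetime.now(timezone.utc).isoformat()
    return [
        {"subject": subject_id, "purpose": p,
         "form_version": form_version, "granted_at": ts}
        for p in purposes
    ]

# The subject agreed to chatbot use and profiling separately, on form v2.1.
records = capture_consent("u-123", ["ai_chatbot", "ai_profiling"], "v2.1")
print(len(records))  # 2
```

Because each purpose is its own record, withdrawing consent for profiling later does not disturb the chatbot consent, and the form version shows exactly what was disclosed.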

Step 4: Implement Consent Capture and Records

Move from design to the technical implementation of your consent mechanisms.

Action items:

  • Integrate consent capture at appropriate touchpoints
  • Ensure consent records are timestamped and immutable
  • Link consent status to data processing systems
  • Build withdrawal mechanisms that actually stop processing

Timeline: 4-8 weeks depending on system complexity
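One way to make consent records timestamped and tamper-evident is a simple hash chain, where each entry commits to the previous one. This is a sketch of the idea under assumed field names, not a substitute for a proper audit-log product.

```python
import hashlib
import json
from datetime import datetime, timezone

class ConsentLog:
    """Append-only consent log; any edit to a past entry breaks verification."""

    def __init__(self):
        self.entries = []

    def append(self, subject_id, action, purpose):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "subject": subject_id, "action": action, "purpose": purpose,
            "at": datetime.now(timezone.utc).isoformat(), "prev": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = ConsentLog()
log.append("u-123", "granted", "ai_profiling")
log.append("u-123", "withdrawn", "ai_profiling")
print(log.verify())  # True
```

In production you would back this with write-once storage; the chain only makes tampering detectable, not impossible.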

Step 5: Build Withdrawal Pathways

Consent must be as easy to withdraw as it was to give.

Action items:

  • Create clear withdrawal pathways (self-service preferred)
  • Ensure withdrawal propagates to all AI systems using the data
  • Define retention rules post-withdrawal (how long to keep, in what form)
  • Document operational procedures for processing withdrawal requests

Timeline: 2-4 weeks
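Propagation is the part that most often fails silently: one way to guarantee fan-out is to require every AI system consuming the data to register a revocation handler with a central broker. The broker, system names, and handlers below are all illustrative assumptions.

```python
class WithdrawalBroker:
    """Fan a withdrawal out to every registered AI system and log each action."""

    def __init__(self):
        self.handlers = {}   # system name -> callable(subject_id)
        self.log = []

    def register(self, system, handler):
        self.handlers[system] = handler

    def withdraw(self, subject_id):
        for system, handler in self.handlers.items():
            handler(subject_id)   # system stops processing this subject's data
            self.log.append((subject_id, system, "processing stopped"))
        return len(self.log)

broker = WithdrawalBroker()
stopped = set()
broker.register("recommendation_model", lambda sid: stopped.add(("rec", sid)))
broker.register("churn_scoring", lambda sid: stopped.add(("churn", sid)))
broker.withdraw("u-123")
print(sorted(stopped))
```

A broker like this also gives you the "record of withdrawals and actions taken" that the documentation step below requires, for free.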

Step 6: Document Everything

Your documentation is your defense.

Required documentation:

  • Legal basis assessment for each AI use case
  • Consent form versions and deployment dates
  • Record of consents obtained (who, when, what version)
  • Record of withdrawals and actions taken
  • Legitimate interests assessments where applicable

Timeline: Ongoing


Common Failure Modes

1. Consent Buried in Terms of Service

The problem: Burying AI processing consent in general T&Cs fails the "specific" and often the "informed" requirement.

The fix: Separate, purpose-specific consent for AI processing at the point where it becomes relevant.

2. Assuming Legitimate Interests Covers Everything

The problem: Using legitimate interests without proper balancing tests, especially for high-risk AI uses like profiling.

The fix: Conduct and document Legitimate Interests Assessments. Be conservative — if in doubt, get consent.

3. No Mechanism for Withdrawal

The problem: Collecting consent but having no way for data subjects to withdraw it (or withdrawal doesn't actually stop processing).

The fix: Build withdrawal into the system from day one. Test that withdrawal actually propagates.

4. Insufficient Disclosure About AI Logic

The problem: Consent obtained without explaining that AI is involved or how it works.

The fix: Disclose: (1) AI is used, (2) what it does, (3) potential consequences, (4) human oversight available.

5. Ignoring Downstream Uses

The problem: Consent obtained for one purpose, but AI models trained on that data are used for other purposes.

The fix: Purpose specification must cover all intended uses. New purposes may require fresh consent.

6. Cross-Border Blindspots

The problem: Consent valid in Singapore may not meet Malaysian or Thai requirements if data flows across borders.

The fix: Design consent to meet the highest standard across your operating jurisdictions.


Implementation Checklist

Pre-Implementation

  • AI data flows mapped and documented
  • Personal data types classified by sensitivity
  • Legal basis determined for each AI processing activity
  • Legitimate interests assessments completed where applicable
  • Consent mechanism designed for clarity and granularity
  • Consent language reviewed by legal counsel
  • Withdrawal mechanism designed

Implementation

  • Consent capture integrated at appropriate touchpoints
  • Consent records stored with timestamp and version
  • Consent status linked to AI processing systems
  • Withdrawal pathway functional and tested
  • Withdrawal propagation to all relevant systems verified

Ongoing Operations

  • Consent form versions tracked
  • Regular review of consent validity (purposes still accurate?)
  • Withdrawal requests processed within required timeframes
  • Documentation maintained and audit-ready
  • Staff trained on consent procedures

Audit Preparation

  • Legal basis assessments available for each AI use case
  • Consent records exportable for regulatory review
  • Withdrawal records and actions documented
  • Evidence of ongoing compliance monitoring

Metrics to Track

| Metric | Target | Why It Matters |
| --- | --- | --- |
| Consent rate | Benchmark against industry (typically 60-80%) | Low rates may indicate UX issues or trust problems |
| Withdrawal rate | <5% monthly | High withdrawal may signal over-reach or poor disclosure |
| Time to process withdrawal | <7 days | Regulatory expectation in most jurisdictions |
| Consent form version currency | 100% using current version | Outdated forms create compliance gaps |
| Documentation completeness | 100% of AI use cases documented | Audit readiness |
| Staff training completion | 100% of relevant staff | Operational compliance |
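The first two metrics in the table reduce to simple monthly ratios. A minimal computation, with the 5% withdrawal threshold from the table and assumed input counts:

```python
def consent_metrics(granted, asked, withdrawn, active):
    """Monthly consent-rate and withdrawal-rate figures from raw counts."""
    consent_rate = granted / asked if asked else 0.0
    withdrawal_rate = withdrawn / active if active else 0.0
    return {
        "consent_rate": round(consent_rate, 3),
        "withdrawal_rate": round(withdrawal_rate, 3),
        "withdrawal_alert": withdrawal_rate > 0.05,  # flag breach of <5% target
    }

# Example month: 1,000 consent requests, 720 granted; 30 of 900 active
# consents withdrawn.
print(consent_metrics(granted=720, asked=1000, withdrawn=30, active=900))
```

Tracking the ratios monthly rather than cumulatively makes a sudden spike in withdrawals (a disclosure or over-reach signal) visible immediately.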

Tooling Suggestions

  • OneTrust — Enterprise-grade, strong APAC compliance features
  • TrustArc — Good integration capabilities
  • Cookiebot — Lighter weight, good for web-focused consent
  • Custom solutions — May be necessary for complex AI integrations

Selection Criteria

  • Jurisdiction coverage (SG, MY, TH requirements)
  • Integration with your AI systems
  • Audit trail and reporting capabilities
  • Scalability for your data volumes
  • API availability for automation

Build vs. Buy

For organizations with significant AI deployments, hybrid approaches often work best:

  • CMP for standard web/app consent
  • Custom integration layer linking consent status to AI pipelines
  • Data orchestration tools (Segment, mParticle) can help propagate consent status


Next Steps

Getting consent right for AI is complex but manageable with proper frameworks in place. The investment in robust consent infrastructure pays dividends in:

  • Reduced regulatory risk
  • Increased customer trust
  • Operational clarity when scaling AI initiatives
  • Faster response to regulatory inquiries

For a comprehensive assessment of your AI consent posture and data protection compliance:

Book an AI Readiness Audit — Our assessment covers consent mechanisms, legal basis documentation, and cross-border compliance gaps specific to your AI deployments.


Disclaimer

This article provides general guidance on AI consent requirements and should not be construed as legal advice. Data protection requirements vary by jurisdiction, industry, and specific circumstances. Organizations should consult qualified legal counsel in their operating jurisdictions before implementing consent frameworks for AI systems.


References

  1. Personal Data Protection Commission Singapore. (2025). Advisory Guidelines on the Personal Data Protection Act. PDPC Singapore.

  2. Personal Data Protection Department Malaysia. (2025). Guidelines on Data Protection for AI Systems. JPDP Malaysia.

  3. Personal Data Protection Committee Thailand. (2024). PDPA Guidance on Consent and Automated Decision-Making. PDPC Thailand.

  4. Infocomm Media Development Authority Singapore. (2024). Model AI Governance Framework, Second Edition. IMDA.

  5. International Association of Privacy Professionals. (2025). APAC Privacy Law Comparison Guide. IAPP.


Frequently Asked Questions

Does consent apply to anonymized data?

Truly anonymized data is not personal data, so data protection laws (and consent requirements) don't apply. However, be cautious: if re-identification is possible, the data is pseudonymized, not anonymized, and consent rules still apply.

Michael Lansdowne Hauge

Founder & Managing Partner

Founder & Managing Partner at Pertama Partners. Founder of Pertama Group.

Tags: consent, PDPA, data protection, compliance, DPO, Singapore, Malaysia, Thailand

Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.

Book an AI Readiness Audit