
AI and Data Privacy: GDPR, CCPA, and Beyond

May 16, 2025 · 13 min read · Michael Lansdowne Hauge
For: CISO, Legal/Compliance, CTO/CIO, CHRO, IT Manager, Consultant, CMO, Data Science/ML

Comprehensive guide to the data privacy regulations affecting AI systems (GDPR, CCPA, CPRA, other state privacy laws, and international frameworks), with practical compliance strategies for AI training, deployment, and automated decision-making.


Key Takeaways

  1. GDPR provides the most comprehensive framework for AI-related data privacy, with strict rules on lawful basis, profiling, automated decisions, and cross-border transfers.
  2. CPRA extends CCPA with explicit rights around automated decision-making, profiling, and sensitive personal information, making California the de facto floor for US AI privacy compliance.
  3. Technical constraints around explainability and untraining mean AI teams must design for privacy from the outset, not retrofit compliance after deployment.
  4. Automated decisions with legal or similarly significant effects trigger heightened safeguards across GDPR, CPRA, and several state and international laws.
  5. Profiling and personalization are widely regulated and require clear transparency, even when decisions are not legally or economically high-stakes.
  6. DPIAs and similar impact assessments are becoming mandatory for high-risk AI and should be integrated into standard AI development lifecycles.
  7. Vendor and third-party AI use demands robust contracts, role definitions, and due diligence to manage shared privacy and regulatory risk.

Executive Summary: AI systems fundamentally rely on personal data for training, operation, and decision-making, creating complex intersections with data privacy laws worldwide. The EU's GDPR establishes comprehensive requirements for AI including lawful basis for processing, data minimization, purpose limitation, automated decision-making rights, and data protection by design. California's CPRA extends CCPA with automated decision-making opt-outs, profiling disclosures, and sensitive personal information restrictions. Other US state privacy laws in Virginia, Colorado, Connecticut, and Utah create a patchwork of compliance obligations. International frameworks in the UK, Canada, Brazil, and emerging regulations in Asia add further complexity. Organizations deploying AI must navigate consent requirements for training data, transparency obligations for automated decisions, data subject rights covering access, deletion, and portability, cross-border transfer restrictions, and vendor accountability, with penalties ranging from administrative fines to class action liability reaching billions of dollars.

AI-Specific Privacy Challenges

AI creates a category of privacy compliance challenges that traditional data processing frameworks were never designed to address. Training datasets may contain the personal data of millions of individuals, often collected across disparate contexts and timeframes. Purpose limitation principles come into direct tension with model reuse and transfer learning, where a model trained for one objective is repurposed for another. The right to deletion presents a particularly thorny technical problem: organizations cannot easily "untrain" a model once an individual's data has shaped its parameters. Explainability requirements, meanwhile, demand that organizations provide intelligible accounts of how complex neural networks reach their conclusions, a task that remains at the frontier of AI research.

Beyond these foundational tensions, profiling and automated decision-making trigger additional rights and obligations under virtually every major privacy regime. And the growing reliance on vendor AI, including third-party foundation models and API-based inference services, complicates the traditional allocation of responsibility between data controllers and data processors.

GDPR: Comprehensive EU Framework

Core Principles Applied to AI

The GDPR's six core principles under Article 5 apply with full force to AI systems, though each raises distinct implementation questions.

Lawfulness, Fairness, and Transparency require that every act of AI processing rest on a valid legal basis, whether consent, contract performance, legitimate interest, or another ground specified in the regulation. Individuals must be able to understand the role AI plays in decisions that affect them, and organizations cannot use personal data for AI training without establishing an appropriate legal basis before processing begins.

Purpose Limitation demands that data collected for one stated purpose not be repurposed for incompatible AI applications. This principle creates friction with the common practice of training general-purpose models on data originally gathered for narrower objectives. Transfer learning and model reuse require careful analysis of whether the new purpose remains compatible with the original collection context.

Data Minimization obliges organizations to collect only the personal data genuinely necessary for AI training and operation. In practice, this means assessing whether full-fidelity personal data is truly required or whether aggregation, anonymization, synthetic data generation, or federated learning techniques could achieve comparable outcomes with less privacy intrusion.
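
To make this concrete, the sketch below shows one way a pipeline might reduce data fidelity before training: direct identifiers are replaced with salted hashes and quasi-identifiers are coarsened. All field names and values here are hypothetical, not a prescribed method.

```python
import hashlib

# Hypothetical record schema; real pipelines will differ.
raw_records = [
    {"email": "a@example.com", "age": 34, "city": "Lisbon", "spend": 127.4},
    {"email": "b@example.com", "age": 41, "city": "Porto", "spend": 83.9},
]

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace the direct identifier with a salted hash so the training
    pipeline never handles the raw email address."""
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:16]
    out = {k: v for k, v in record.items() if k != "email"}
    out["subject_token"] = token
    return out

def coarsen(record: dict) -> dict:
    """Drop precision the model does not need: bucket age into decades
    and round spend, reducing identifiability while keeping utility."""
    out = dict(record)
    out["age_band"] = f"{(out.pop('age') // 10) * 10}s"
    out["spend"] = round(out["spend"], -1)
    return out

training_rows = [coarsen(pseudonymize(r, salt="rotate-per-release"))
                 for r in raw_records]
print(training_rows[0])
```

Note that pseudonymized data of this kind generally remains personal data under the GDPR; only robust anonymization or aggregation takes data out of scope entirely.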

Accuracy requires that training data be accurate and kept up to date. Inaccurate training data propagates through model weights, producing biased or incorrect AI decisions at scale. The obligation to correct inaccurate data may in turn necessitate model retraining, an expensive and operationally disruptive exercise.

Storage Limitation restricts retention of personal data to the period necessary for its processing purpose. For AI, this raises the question of whether training data must be deleted once model training concludes, particularly given that models themselves may memorize fragments of personal data from their training corpus.

Integrity and Confidentiality mandate appropriate security measures for both training datasets and deployed AI models. This encompasses preventing unauthorized access, defending against model theft and adversarial attacks, and implementing encryption, access controls, and secure deployment architectures.

Lawful Basis for AI Processing

Establishing a lawful basis under Article 6 is a prerequisite for any AI processing of personal data, and the choice of basis carries significant downstream consequences.

Consent under Article 6(1)(a) must be freely given, specific, informed, and unambiguous. For AI, this implies granular consent that distinguishes between training, inference, and profiling purposes. The right to withdraw consent raises the difficult question of how to "untrain" a model. In practice, consent is often impractical as the sole basis for large-scale AI training datasets given the volume of individuals involved.
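
A minimal sketch of what granular, withdrawable consent might look like in code, assuming a hypothetical ConsentRecord keyed by data subject; real consent-management platforms add identity verification, receipts, and propagation to downstream systems.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Purpose(Enum):
    TRAINING = "model training"
    INFERENCE = "inference on my data"
    PROFILING = "profiling / personalization"

@dataclass
class ConsentRecord:
    subject_id: str
    granted: set                                   # purposes currently consented to
    audit_log: list = field(default_factory=list)  # timestamped trail for accountability

    def withdraw(self, purpose: Purpose) -> None:
        self.granted.discard(purpose)
        self.audit_log.append((datetime.now(timezone.utc), "withdrawn", purpose.value))

    def permits(self, purpose: Purpose) -> bool:
        return purpose in self.granted

consent = ConsentRecord("user-123", {Purpose.TRAINING, Purpose.INFERENCE})
consent.withdraw(Purpose.TRAINING)
assert not consent.permits(Purpose.TRAINING)  # exclude from the next training run
```

Recording consent per purpose, rather than as a single flag, is what makes withdrawal actionable: the next training job can consult `permits(Purpose.TRAINING)` per subject.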

Contract Performance under Article 6(1)(b) permits AI processing that is necessary to fulfill contractual obligations. Fraud detection AI for payment processing is a common example. However, this basis is limited to purposes strictly necessary for the contract and cannot be stretched to cover ancillary AI applications.

Legitimate Interest under Article 6(1)(f) offers the most flexibility for many B2B and certain B2C AI use cases. It requires a documented balancing test weighing the organization's interests against the individual's rights and reasonable expectations. A formal Legitimate Interest Assessment (LIA) should be prepared and retained. Typical applications include product recommendations, customer analytics, and internal operational optimization. This basis is generally unsuitable for high-risk profiling or processing of most special category data.

Legal Obligation under Article 6(1)(c) applies where AI processing is required to comply with a legal requirement, such as AML/KYC systems at financial institutions.

Special Category Data under Article 9 imposes a higher bar for processing biometric data (facial recognition, voiceprints), health data, and data revealing racial or ethnic origin. Such processing requires explicit consent or another Article 9 basis. AI systems that ingest special category data face stricter requirements and will nearly always require a Data Protection Impact Assessment.

Automated Decision-Making and Profiling (Articles 22, 13-15)

Article 22 establishes the right not to be subject to a decision based solely on automated processing, including AI, where that decision produces legal effects or similarly significantly affects the individual. Credit denials, employment rejections, and automated benefit determinations are paradigmatic examples.

Such solely automated decision-making is generally prohibited unless it is necessary for contract performance, authorized by EU or Member State law, or based on explicit consent. Even where one of these exceptions applies, organizations must implement meaningful safeguards: the right to obtain human intervention, the right to express one's point of view, the right to contest the decision, and access to meaningful information about the logic involved.

In practical terms, this means building a genuine human-in-the-loop for consequential AI decisions. The human reviewer must possess real authority to override the AI recommendation rather than merely rubber-stamping automated outputs. Organizations must also develop explainability sufficient for individuals to understand the factors driving a decision, along with formal appeal mechanisms and human review processes.
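
The sketch below illustrates one possible shape for such a human-in-the-loop gate; the decision fields, thresholds, and reviewer logic are invented for illustration. The structural point is that the model's output is a recommendation, and a person with genuine override authority produces the final outcome.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CreditDecision:
    applicant_id: str
    model_score: float            # e.g. estimated probability of default
    model_recommendation: str     # "approve" or "deny"
    top_factors: list             # drivers of the score, shown to the reviewer
    final_outcome: Optional[str] = None
    reviewed_by: Optional[str] = None

def loan_officer_review(decision: CreditDecision) -> str:
    """The reviewer sees the score AND the factors and can override;
    always returning the recommendation (rubber-stamping) would not
    amount to meaningful human involvement."""
    if decision.model_recommendation == "deny" and decision.model_score < 0.55:
        return "approve"          # borderline denial overridden on review
    return decision.model_recommendation

def decide(decision: CreditDecision, review: Callable[[CreditDecision], str],
           reviewer_id: str) -> CreditDecision:
    # Every consequential decision passes through a human before it is final,
    # so the outcome is not "based solely on automated processing".
    decision.final_outcome = review(decision)
    decision.reviewed_by = reviewer_id
    return decision

d = decide(CreditDecision("app-42", 0.52, "deny",
                          ["debt-to-income ratio", "short credit history"]),
           loan_officer_review, "officer-7")
print(d.final_outcome)            # "approve"
```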

Transparency Obligations under Articles 13 through 15 require organizations using AI for automated decisions or profiling to inform individuals of the existence of automated decision-making, the logic involved in terms they can understand, the significance and envisaged consequences, the categories of personal data used, and the right to human intervention and to contest decisions.

Profiling, as elaborated in Recital 71, encompasses any form of automated processing used to evaluate personal aspects such as behavior, interests, or preferences. Even profiling that does not produce consequential decisions, including targeted advertising and content recommendation systems, requires transparency. Organizations must disclose profiling activities in their privacy notices.

Data Subject Rights in AI Context

The GDPR's data subject rights acquire new dimensions when applied to AI systems.

The Right of Access under Article 15 entitles individuals to a copy of their personal data. In the AI context, this means providing information about what data was used to train or operate models that affect the requesting individual. The practical approach is to furnish data pertaining to that specific individual rather than exposing entire training datasets.

The Right to Rectification under Article 16 requires correction of inaccurate personal data. Where incorrect data has materially influenced model outputs, rectification may necessitate model retraining. Organizations should document their processes for handling correction requests and their downstream impact on AI systems.

The Right to Erasure, often called the "Right to Be Forgotten" under Article 17, requires deletion of personal data when it is no longer necessary, when consent has been withdrawn, or when processing was unlawful. For AI, this presents a fundamental technical challenge: one cannot easily erase an individual's influence from a trained model. Available approaches include ceasing to use the model and retraining from scratch, deploying emerging machine unlearning techniques, demonstrating that data has been deleted from training datasets and excluded from future models, or anonymizing data to the point where the erasure obligation no longer applies.
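
As a rough illustration of how these options combine in practice, the following sketch uses in-memory stand-ins for real data stores, plus an exclusion list consulted by future training jobs; every name here is hypothetical.

```python
from datetime import date

# In-memory stand-ins for real systems (hypothetical).
operational_db = {"user-123": {"email": "a@example.com"}}
training_corpus = {"user-123": ["purchase history", "support chats"]}
exclusion_list: set = set()    # consulted by every future training job
retrain_queue: list = []       # picked up by the ML pipeline

def process_erasure(subject_id: str) -> dict:
    """Layered response to an Article 17 request (sketch)."""
    steps = {"subject_id": subject_id, "received": date.today().isoformat()}
    # 1. Hard-delete from operational stores and raw training corpora.
    operational_db.pop(subject_id, None)
    training_corpus.pop(subject_id, None)
    steps["source_data_deleted"] = True
    # 2. Guarantee exclusion from all future training runs.
    exclusion_list.add(subject_id)
    steps["excluded_from_future_models"] = True
    # 3. Queue retraining or machine unlearning for already-trained models;
    #    where infeasible, a documented justification is recorded instead.
    retrain_queue.append(("retrain_without", subject_id))
    steps["model_remediation"] = "queued"
    return steps               # retained as evidence of compliance

print(process_erasure("user-123"))
```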

The Right to Data Portability under Article 20 entitles individuals to receive their personal data in a structured, machine-readable format and to transfer it to another controller. In the AI context, this applies to the data used as input to AI systems, not to the AI model itself. However, an individual's profile as constructed by AI may itself be portable.

The Right to Object under Article 21 allows individuals to object to processing based on legitimate interest or for direct marketing purposes. In AI terms, individuals may object to profiling for marketing. Organizations relying on legitimate interest must demonstrate compelling grounds that override the individual's interests, or else cease processing.

Data Protection by Design and Default (Article 25)

Privacy by Design requires incorporating privacy safeguards at the AI design stage rather than retrofitting them after deployment. This includes adopting privacy-enhancing technologies such as differential privacy, federated learning, synthetic data generation, and homomorphic encryption.

Privacy by Default requires that default system settings be privacy-protective. Organizations should limit personal data processing to what is necessary by default, for example by defaulting to non-personalized AI recommendations and requiring an affirmative opt-in for personalized experiences.
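
A toy example of privacy-protective defaults: personalization is off unless the user opts in, and the default path processes no personal data at all. Setting and function names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class RecommendationSettings:
    # Privacy-protective defaults: the user must opt IN to personalization.
    personalized: bool = False
    use_browsing_history: bool = False
    use_location: bool = False

def popularity_ranking(catalog: list) -> list:
    return sorted(catalog, key=lambda item: item["popularity"], reverse=True)

def personalized_ranking(settings: RecommendationSettings, catalog: list) -> list:
    """Placeholder for the opt-in pipeline, which would run under its own
    documented legal basis and privacy-notice disclosures."""
    return catalog

def recommend(settings: RecommendationSettings, catalog: list) -> list:
    if not settings.personalized:
        # Default path: no personal data is processed.
        return popularity_ranking(catalog)[:3]
    return personalized_ranking(settings, catalog)[:3]

catalog = [{"sku": "A", "popularity": 9}, {"sku": "B", "popularity": 4}]
print(recommend(RecommendationSettings(), catalog))  # non-personalized by default
```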

Data Protection Impact Assessment (DPIA) (Article 35)

A DPIA is required before deploying AI systems that involve systematic and extensive profiling with legal or significant effects, large-scale processing of special category data such as biometric or health information, systematic monitoring of publicly accessible areas (for example, facial recognition in public spaces), or the use of novel AI technologies.

The DPIA should describe the AI processing and its purposes, assess necessity and proportionality, identify risks to individuals' rights and freedoms, specify mitigation measures both technical and organizational, include a bias, discrimination, and fairness analysis, and reflect consultation with the organization's data protection officer where applicable.

DPIAs should be conducted before deployment, updated whenever the AI system changes significantly, and documented for retention and potential regulatory review.
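
Some teams keep DPIAs as structured records alongside the narrative document so they can be versioned and reviewed with the system itself. The skeleton below is a minimal, illustrative sketch; the field names mirror the elements above but are not a regulatory template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DPIARecord:
    """Machine-readable DPIA skeleton mirroring the Article 35 elements
    described above; field names are illustrative, not prescribed by law."""
    system_name: str
    description_and_purposes: str
    personal_data_categories: list
    necessity_and_proportionality: str
    risks_to_individuals: list           # e.g. discrimination, error, intrusion
    mitigations: list                    # technical and organizational
    bias_and_fairness_analysis: str
    dpo_consulted: bool
    completed_on: date
    review_triggers: list = field(default_factory=lambda:
        ["annual review", "significant system change"])

dpia = DPIARecord(
    system_name="credit-scoring-v2",
    description_and_purposes="Scores loan applications for default risk.",
    personal_data_categories=["financial history", "employment data"],
    necessity_and_proportionality="Manual review alone cannot scale; ...",
    risks_to_individuals=["disparate impact on protected groups"],
    mitigations=["human review of all denials", "quarterly bias audit"],
    bias_and_fairness_analysis="See fairness report 2025-Q1.",
    dpo_consulted=True,
    completed_on=date(2025, 3, 1),
)
```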

Cross-Border Data Transfers

Global AI deployments inevitably involve cross-border data transfers, whether for training data aggregation, model hosting, or inference processing. The GDPR restricts transfers of personal data outside the European Economic Area unless adequate protections are in place.

Adequacy Decisions under Article 45 allow the European Commission to recognize certain countries as providing a level of data protection essentially equivalent to the EU's. The UK, Switzerland, Japan, and Canada (for commercial activities) are among the jurisdictions that have received adequacy status, enabling free data flows.

Standard Contractual Clauses (SCCs) under Article 46 are EU Commission-approved contracts between data exporters and importers. The updated 2021 SCCs introduced a requirement for transfer impact assessments, obliging organizations to evaluate whether the destination country's legal framework undermines the protections the SCCs are designed to provide. Supplementary measures such as encryption and data minimization may be necessary to bridge identified gaps.

Binding Corporate Rules (BCRs) under Article 47 provide a mechanism for multinational corporate groups to establish internal policies governing cross-border transfers. BCRs require approval from a lead data protection authority and involve a complex, time-consuming approval process.

US Transfers Post-Schrems II remain particularly challenging. The Court of Justice of the European Union invalidated the EU-US Privacy Shield in 2020, and US surveillance laws continue to complicate reliance on SCCs. The EU-US Data Privacy Framework, which received an adequacy decision in July 2023, now permits transfers to certified US organizations, though it faces ongoing legal challenges. Other options include SCCs supplemented with additional technical and organizational measures, minimizing transfers to the US, or pursuing data localization strategies.

California Privacy Laws: CPRA and CCPA

California Privacy Rights Act (CPRA)

The CPRA, which significantly amended the CCPA, introduced several provisions with direct relevance to AI systems.

The Right to Know under Section 1798.100 enables consumers to request the categories and specific pieces of personal information collected about them. For AI, this means organizations must be able to disclose what data has been used for AI training or inference concerning the requesting consumer.

The Right to Delete under Section 1798.105 allows consumers to request deletion of their personal information, subject to exceptions for completing transactions, detecting fraud, exercising free speech, or complying with legal obligations. AI systems face the same technical challenges here as under the GDPR's right to erasure.

The Right to Opt-Out of Sale and Sharing under Sections 1798.120 and 1798.135 defines "sale" as disclosing personal information for monetary or other valuable consideration, and "sharing" as disclosing personal information for cross-context behavioral advertising. Sharing data with AI vendors or using AI for targeted advertising may trigger this opt-out right. Organizations must provide a clearly labeled "Do Not Sell or Share My Personal Information" link.

The Right to Correct under Section 1798.106 enables consumers to request correction of inaccurate personal information, which in the AI context may require retraining or updating models.

The Right to Limit Use of Sensitive Personal Information under Section 1798.121 covers categories including Social Security numbers, financial account information, biometric data, precise geolocation, health data, and information about sex life. Consumers can limit use of this information to purposes that are strictly necessary, meaning organizations cannot deploy sensitive personal information for profiling or targeted advertising without appropriate consent.

Automated Decision-Making Rights under Section 1798.185(a)(16) represent the CPRA's most significant AI-specific provisions. Consumers gain the right to meaningful information about the logic involved in automated decision-making when decisions produce legal or similarly significant effects. The right to opt out of profiling under Section 1798.135(b)(3) applies to automated decision-making technology, including profiling, when decisions affect credit, employment, housing, education, healthcare, or insurance eligibility and pricing. When an automated decision produces a legal or similarly significant effect, consumers are entitled to request a review of the decision, to appeal, and to obtain human review and explanation.

Implementation requires providing an opt-out mechanism analogous to the "Do Not Sell" link, building a human review process for consequential automated decisions, and training staff to handle automated decision appeals.

Enforcement rests with the California Privacy Protection Agency (CPPA). Administrative fines reach up to $2,500 per violation and $7,500 per intentional violation. A private right of action exists for certain data breaches, with statutory damages of $100 to $750 per consumer per incident. Class actions can produce exposure in the billions of dollars.

CCPA vs. CPRA: Key Differences for AI

| Aspect | CCPA (2020-2022) | CPRA (2023+) |
|---|---|---|
| Automated decisions | No specific provisions | Opt-out right, access to logic, human review |
| Sensitive data | Not defined | Special category with use limitations |
| Risk assessments | Not required | Annual cybersecurity audits for some high-risk processing |
| Enforcement | Attorney General only | New agency (CPPA) with rulemaking authority |
| Penalties | $2,500 / $7,500 per violation | Same, but more active enforcement |

Other US State Privacy Laws

Virginia Consumer Data Protection Act (VCDPA)

The VCDPA, effective January 2023, grants consumers the right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects. The statute defines profiling as any form of automated processing used to evaluate personal aspects. It also requires Data Protection Assessments (DPAs) for profiling that risks legal or significant effects, sensitive data processing, targeted advertising, and the sale of personal data. These assessments must identify and weigh benefits against risks and be made available to the Attorney General upon request. Enforcement is through the Attorney General, with penalties of up to $7,500 per violation. There is no private right of action.

Colorado Privacy Act (CPA)

The CPA, effective July 2023, provides consumers with the right to opt out of profiling that produces legal or similarly significant effects and requires organizations to disclose profiling activities in their privacy notices. Impact assessments are required for high-risk data processing, including profiling with legal or significant effect risks, sensitive data processing, targeted advertising, and data sales. Colorado's assessment requirements are more detailed than Virginia's, and organizations must document and retain assessments for regulators. Enforcement rests with the Attorney General and district attorneys, with violations treated as deceptive trade practices carrying penalties of up to $20,000 per violation.

Connecticut Data Privacy Act (CTDPA)

The CTDPA, effective July 2023, follows a structure similar to Virginia's. It provides a profiling opt-out for decisions with legal or similarly significant effects, requires data protection assessments for high-risk processing, and vests enforcement authority in the Attorney General. There is no private right of action.

Utah Consumer Privacy Act (UCPA)

The UCPA, effective December 2023, takes a notably more business-friendly approach. It contains no automated decision-making specific provisions and imposes no risk assessment requirements, making it the least burdensome of the current state privacy laws for AI deployers.

Multi-State Compliance Strategy

Across these state laws, several common elements emerge. All provide rights of access, deletion, correction, and opt-out of targeted advertising and sale. All require transparency through privacy notices and disclosures. All provide enhanced protections for sensitive data categories including biometric, health, and precise location data. And most provide opt-out rights when profiling produces legal or similarly significant effects.

The most efficient compliance strategy is to harmonize to the strictest requirements, which in most cases means treating California's CPRA as the baseline. This approach covers all states while reducing operational complexity. Organizations should implement opt-out mechanisms for automated decisions producing legal or significant effects, conduct risk and impact assessments for high-risk AI, provide meaningful information about automated decision logic, and enable human review for consequential automated decisions.
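
One way to operationalize "comply to the ceiling" is a capability matrix: record what each statute requires and build to the union. The matrix below is a deliberately simplified illustration, not legal advice; real obligations turn on thresholds, exemptions, and statutory definitions.

```python
# Simplified capability matrix (illustrative only).
STATE_REQUIREMENTS = {
    "CPRA":  {"access", "delete", "correct", "opt_out_sale_share",
              "limit_sensitive", "admt_opt_out", "human_review"},
    "VCDPA": {"access", "delete", "correct", "profiling_opt_out",
              "data_protection_assessment"},
    "CPA":   {"access", "delete", "correct", "profiling_opt_out",
              "data_protection_assessment"},
    "CTDPA": {"access", "delete", "correct", "profiling_opt_out",
              "data_protection_assessment"},
    "UCPA":  {"access", "delete"},
}

# "Comply to the ceiling": build once for the union of all obligations.
baseline_capabilities = set().union(*STATE_REQUIREMENTS.values())

def gaps(current_capabilities: set) -> set:
    """Capabilities still to build to cover every state in the matrix."""
    return baseline_capabilities - current_capabilities

print(sorted(gaps({"access", "delete", "correct"})))
```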

International Privacy Frameworks

UK GDPR and Data Protection Act 2018

The post-Brexit UK GDPR remains substantially identical to the EU GDPR. The UK Information Commissioner's Office (ICO) has issued guidance emphasizing fairness, transparency, and accountability as the governing principles for AI systems. The lawful basis requirements mirror those of the EU, and transparency and explainability expectations are calibrated proportionate to risk. Article 22 continues to apply to solely automated decisions with legal or similarly significant effects, and data protection by design obligations extend fully to AI.

The ICO can impose fines of up to £17.5 million or 4% of global annual turnover, whichever is greater.

Canada PIPEDA and Proposed AI Act

Under the current framework established by PIPEDA, organizations must obtain consent for the collection, use, and disclosure of personal information and must specify and limit the purposes of processing. Individuals can challenge the accuracy of information held about them, and regulators have expressed emerging expectations for meaningful explanations of automated decisions.

The proposed Consumer Privacy Protection Act (Bill C-27) would modernize PIPEDA with explicit automated decision-making provisions, including a right to explanation of predictions, recommendations, and decisions, a right to request human review, and requirements for algorithmic impact assessments for high-risk systems.

The proposed Artificial Intelligence and Data Act (AIDA) would introduce a risk-based AI regulatory framework distinguishing high-risk from general-purpose systems, with requirements for anonymization and bias mitigation. AIDA contemplates significant administrative and potentially criminal penalties for non-compliance.

Brazil LGPD (Lei Geral de Proteção de Dados)

Brazil's LGPD, effective September 2020, mirrors the GDPR in many respects. It establishes legal bases for processing including consent and legitimate interest, provides data subject rights covering access, correction, deletion, and portability, requires data protection by design, and mandates data protection impact assessments.

Article 20 of the LGPD specifically addresses automated decision-making, granting individuals the right to request review of automated decisions and the right to information about the criteria and procedures used. The ANPD (Autoridade Nacional de Proteção de Dados) retains authority to issue further regulations.

Penalties reach up to 2% of an organization's revenue in Brazil, capped at R$50 million per violation, with the possibility of daily fines and suspension of processing activities.

Practical Compliance Framework for AI

Phase 1: Data Mapping and Classification

The foundation of AI privacy compliance is a comprehensive understanding of what personal data your AI systems process and how they process it. Begin by inventorying all AI and ML systems that touch personal data, documenting the complete data flow from collection through training, inference, and storage, and identifying which categories of personal data are involved, distinguishing regular personal data from special or sensitive categories.

Next, map the sources of that data. First-party data collected directly from individuals carries different compliance considerations than third-party data that has been purchased, scraped, or drawn from public datasets. Synthetic or anonymized data may fall outside the scope of privacy regulation entirely, though the quality of anonymization must be rigorously assessed.

Finally, classify each AI system by risk level. High-risk systems are those making automated decisions with legal or similarly significant effects, processing special category data, or operating at large scale. Medium-risk systems include those engaged in profiling, targeted advertising, or other substantial processing of personal data. Low-risk systems are limited to internal analytics on anonymized data with no consequential decision-making.
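
A rough sketch of how this triage might be encoded as a reusable helper; the criteria mirror the tiers above, and any real classification should be confirmed with counsel rather than left to a function.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    name: str
    makes_consequential_decisions: bool    # legal or similarly significant effects
    processes_special_category_data: bool  # biometric, health, etc.
    large_scale: bool
    does_profiling: bool
    anonymized_only: bool

def classify_risk(p: AISystemProfile) -> str:
    """Rule-of-thumb tiering mirroring the criteria in the text."""
    if (p.makes_consequential_decisions
            or p.processes_special_category_data
            or p.large_scale):
        return "high"      # DPIA / impact assessment required
    if p.does_profiling and not p.anonymized_only:
        return "medium"    # transparency and opt-out obligations likely
    return "low"

print(classify_risk(AISystemProfile(
    name="resume-screener", makes_consequential_decisions=True,
    processes_special_category_data=False, large_scale=False,
    does_profiling=True, anonymized_only=False)))   # -> "high"
```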

Phase 2: Establish Legal Basis and Transparency

With a clear data map in hand, the next step is to establish and document the legal basis for each AI system's processing activities. Under the GDPR, this means identifying the applicable Article 6 basis, whether consent, legitimate interest, contract performance, or another ground. Under the CPRA and other state laws, it means ensuring compliance with purpose limitation principles and the rules governing sensitive data.

For AI systems relying on legitimate interest, organizations should prepare formal Legitimate Interest Assessments that address four questions: what is the AI trying to achieve, is AI necessary to achieve that purpose, how do the organization's interests balance against the individual's rights and expectations, and what safeguards are in place to mitigate risks.
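
Such an assessment can be captured as a structured, retainable record. The following sketch is illustrative only; the fields mirror the four questions above but are not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """Retainable LIA record; field names are illustrative, not a template."""
    system_name: str
    purpose: str        # what is the AI trying to achieve?
    necessity: str      # is AI necessary to achieve that purpose?
    balancing: str      # org interests vs. individual rights and expectations
    safeguards: list    # mitigations that tip the balance

lia = LegitimateInterestAssessment(
    system_name="product-recommender",
    purpose="Surface relevant products to existing customers.",
    necessity="Manual curation cannot cover the catalog at scale.",
    balancing="Customers reasonably expect recommendations; rights preserved.",
    safeguards=["no sensitive data", "profiling disclosed in privacy notice",
                "opt-out honored in both training and inference"],
)
```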

Privacy notices must then be updated to disclose AI use, processing purposes, and data categories. They should explain automated decision-making and profiling activities, describe data subject rights including opt-out, access, deletion, and human review options, and provide contact information for privacy inquiries.

Phase 3: Implement Data Subject Rights

Organizations must build operational processes to fulfill data subject rights in the AI context. For access requests, this means establishing a process for identifying an individual's data within training and operational datasets, providing meaningful information about AI decisions affecting them, and explaining the logic, significance, and consequences of automated decisions.
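
A minimal sketch of an access-request handler over hypothetical in-memory stores: it returns only the requester's own data, together with meaningful information about automated decisions affecting them, and never exposes the broader dataset.

```python
# Hypothetical in-memory stores; real systems would query databases,
# data lakes, and decision logs.
operational_db = {"user-123": {"email": "a@example.com", "plan": "pro"}}
training_index = {"user-123": ["clickstream 2024", "support transcripts"]}
decision_log = {"user-123": [{"decision": "credit denied",
                              "top_factors": ["debt-to-income ratio"]}]}

def fulfill_access_request(subject_id: str) -> dict:
    """Gather the requester's data across systems for an Article 15 /
    Right to Know response (sketch)."""
    return {
        "profile": operational_db.get(subject_id, {}),
        "data_used_in_training": training_index.get(subject_id, []),
        "automated_decisions": decision_log.get(subject_id, []),
    }

print(fulfill_access_request("user-123"))
```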

For deletion requests, organizations must remove the individual from operational databases, document the removal from training datasets, and address the challenge of models already trained on that data. Where retraining is feasible, it should be pursued. Where it is not, organizations should document the technical infeasibility, implement compensating safeguards, and evaluate emerging machine unlearning techniques.

Opt-out mechanisms must include the "Do Not Sell or Share" functionality required by California and similar state provisions, opt-out of profiling for decisions with legal or significant effects, and preference management tools. Organizations must honor opt-outs in both AI training and inference pipelines where required.
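
The sketch below shows how such preferences might gate a pipeline, assuming hypothetical consent flags attached to each record at ingestion. Enforcing them upstream matters because a preference is far harder to honor once data has been trained into a model.

```python
# Hypothetical consent flags attached to each record at ingestion time.
records = [
    {"subject_id": "u1", "features": [0.2, 0.7],
     "do_not_sell_share": False, "profiling_opt_out": False},
    {"subject_id": "u2", "features": [0.9, 0.1],
     "do_not_sell_share": True, "profiling_opt_out": True},
]
erased_subjects = {"u-erased-1"}   # maintained by the deletion workflow

def training_eligible(record: dict) -> bool:
    """Enforce opt-outs and erasures before data enters the pipeline."""
    return (record["subject_id"] not in erased_subjects
            and not record["profiling_opt_out"])

train_set = [r for r in records if training_eligible(r)]  # u2 excluded

def sharing_allowed(record: dict) -> bool:
    # "Do Not Sell or Share" must also gate disclosures to AI vendors.
    return not record["do_not_sell_share"]
```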

Human review processes require identifying which consequential automated decisions must offer a human review option, training human reviewers on how the AI system operates and how to exercise meaningful override authority, documenting review decisions and their rationale, and providing appeal mechanisms.

Phase 4: Conduct Risk Assessments

DPIAs, impact assessments, and algorithmic assessments are becoming mandatory for high-risk AI across jurisdictions. Each assessment should be conducted before deploying a high-risk AI system and should document the AI system description and purposes, the personal data processed, a necessity and proportionality analysis, risks to individuals including discrimination, errors, and privacy intrusion, and the mitigation measures adopted, both technical and organizational.

Assessments are not one-time exercises. They should be reviewed annually or whenever triggered by significant changes to the AI system. Organizations should monitor AI fairness, bias, and accuracy on an ongoing basis and update risk assessments to reflect lessons learned.
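
As one concrete example of ongoing monitoring, the snippet below computes a simple demographic parity gap from production decision logs. It is a coarse screening metric over hypothetical data, useful as a trigger to revisit a risk assessment, not a legal test of discrimination.

```python
from collections import defaultdict

def selection_rates(outcomes: list) -> dict:
    """outcomes: [(group, approved: bool), ...] from production decisions."""
    counts = defaultdict(lambda: [0, 0])      # group -> [approved, total]
    for group, approved in outcomes:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def demographic_parity_gap(outcomes: list) -> float:
    """Max difference in approval rates across groups; a widening gap
    should trigger review of the underlying assessment."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(round(demographic_parity_gap(sample), 2))   # ≈ 0.33
```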

Phase 5: Vendor Management

As organizations increasingly rely on third-party AI products and services, vendor management becomes a critical compliance function. Due diligence should assess vendor data privacy and security practices, review the sources and legal basis of vendor AI training data, and map data flows to clarify controller and processor roles.

Data Processing Agreements, required under GDPR Article 28 for relationships with data processors, must specify processing purposes, data types, and security measures. They should include audit rights and subprocessor approval mechanisms, and they must address cross-border transfers where applicable.

For third-party AI models, organizations need to understand what personal data the vendor's models were trained on, secure contractual protections for their own data when used with vendor AI, and allocate liability for privacy violations that may arise from the vendor relationship.

Key Takeaways

The GDPR establishes the most comprehensive AI data privacy framework in force today, encompassing lawful basis requirements, purpose limitation, data minimization, automated decision-making rights under Article 22, and data protection by design obligations, with fines reaching the greater of 20 million euros or 4% of global annual turnover.

California's CPRA adds AI-specific rights that go beyond the original CCPA, including opt-out of automated decision-making producing legal or significant effects, access to the logic used in decisions, limits on the use of sensitive personal information, and mandatory human review options for consequential automated decisions.

The right to erasure creates a genuine technical dilemma for AI operators, as "untraining" individual data from model weights is difficult or impossible with current technology. This reality pushes organizations toward retraining, machine unlearning research, or carefully documented positions on technical infeasibility.

Automated decisions with legal or similarly significant effects trigger heightened obligations across both the GDPR and US state privacy laws, including rights to human intervention, opt-outs, meaningful explanations, and appeal mechanisms. These obligations apply regardless of the jurisdiction, making them a universal compliance priority for any AI system that influences consequential outcomes.

Profiling for any purpose requires transparency. Even non-consequential profiling, such as recommendations and targeted advertising, must be disclosed in privacy notices under the GDPR and several US state privacy laws. Organizations that fail to disclose profiling activities expose themselves to enforcement action even where the profiling itself is otherwise lawful.

Multi-state US compliance benefits most from a harmonization strategy that aligns to the strictest requirements, which are often those of the CPRA, while tracking state-level nuances in definitions, exemptions, and enforcement approaches. This "comply to the ceiling" strategy reduces duplication and simplifies governance.

Risk and impact assessments, including DPIAs, Data Protection Assessments, and algorithmic impact assessments, are rapidly becoming mandatory for high-risk AI across jurisdictions and are central to demonstrating the robust AI governance that regulators increasingly expect.

Common Questions

Can we train AI on publicly available data without consent?

Public availability does not remove GDPR obligations. You still need a lawful basis (often legitimate interest), must respect purpose limitation, and individuals retain data subject rights. Scraping personal data from websites may be unlawful without an adequate legal basis. Document a legitimate interest assessment and ensure transparency where feasible.

Are AI-driven product recommendations lawful under GDPR and CPRA?

Under GDPR, legitimate interest can often justify product recommendations if they are expected by customers and do not override their rights. This is profiling and must be disclosed in privacy notices. Under CPRA, such recommendations usually do not trigger automated decision opt-out rights but may be constrained by sensitive personal information rules. Clearly document your legal basis and disclosures.

How do we honor deletion requests when the data is already in a trained model?

You can retrain models without the individual's data, remove their data from training sets for future models, explore machine unlearning, or, if truly infeasible, document the technical impossibility and apply compensating controls. Building systems with deletion in mind (e.g., modular models, federated learning) reduces future conflicts between erasure rights and model persistence.

When does GDPR Article 22 restrict automated decisions?

Article 22 restricts solely automated decisions with legal or similarly significant effects, such as credit decisions, unless they are necessary for a contract, authorized by law, or based on explicit consent. Even when allowed, you must provide human intervention, allow individuals to express their views and contest decisions, and give meaningful information about the logic used.

What counts as a decision with legal or similarly significant effects?

Typical examples include decisions about credit, employment, insurance, housing, education, healthcare, and government benefits. Low-stakes personalization like basic recommendations usually does not qualify, but edge cases such as content moderation or platform access decisions may. When in doubt, treat the decision as high-impact and apply heightened safeguards.

Can one privacy notice cover multiple jurisdictions?

Yes. Many organizations adopt a single, global privacy notice aligned to the strictest standards (often GDPR and CPRA) and add state- or region-specific sections where needed. This approach simplifies operations and provides a consistent experience while still honoring jurisdiction-specific rights and definitions.

How do cross-border transfer rules affect where we can host AI training data?

GDPR limits transfers of personal data outside the EEA unless the destination has an adequacy decision or appropriate safeguards like SCCs are in place. This constrains hosting EU training data in some jurisdictions and may require EU-only data centers, SCCs plus transfer impact assessments, encryption and data minimization, or architectures like federated learning that avoid transfers altogether.

Why does AI create privacy challenges that traditional frameworks struggle with?

AI amplifies traditional privacy risks because models depend on large, rich datasets and often make consequential decisions. Training data can include millions of individuals, purposes may drift as models are reused, and technical limits on explainability and untraining make it harder to fully honor rights like access, erasure, and objection. Third-party AI services further complicate controller/processor roles and accountability.

€20M or 4%: the maximum GDPR fine for serious violations, including unlawful AI processing.

Source: General Data Protection Regulation (GDPR), Regulation (EU) 2016/679

"A well-executed DPIA is not just a compliance checkbox; it is the backbone of AI governance, surfacing risks early, informing design decisions, and providing evidence of due diligence to regulators and stakeholders."

AI Governance & Privacy Practice

References

  1. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore, 2012.
  2. General Data Protection Regulation (GDPR), Regulation (EU) 2016/679. European Commission, 2016.
  3. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST), 2023.
  4. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
  5. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat, 2024.
  6. OECD Principles on Artificial Intelligence. OECD, 2019.
  7. EU AI Act: Regulatory Framework for Artificial Intelligence. European Commission, 2024.

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia) · Training delivered for Big Four, MBB, and Fortune 500 clients · 100+ angel investments (Seed–Series C) · Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.
