
Philippines NPC AI Guidelines: Data Privacy Act Compliance for AI Systems

February 12, 2026 · 11 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: CISO · Consultant · Legal/Compliance · CTO/CIO · CHRO · IT Manager

The Philippines National Privacy Commission issued Advisory Guidelines on AI in December 2024, requiring organizations to identify and limit algorithmic bias, prohibit AI washing, and comply with the Data Privacy Act for all AI data processing.


Key Takeaways

  1. NPC Advisory Guidelines (December 2024) clarify how the Data Privacy Act applies to AI
  2. Organizations must identify, monitor, and limit three types of algorithmic bias: systemic, human, and statistical
  3. AI washing is prohibited — organizations cannot misrepresent AI involvement in data processing
  4. The NPC has authority to audit AI systems and investigate complaints about algorithmic bias
  5. Multiple AI bills are pending in Congress, including AIDA (Artificial Intelligence Development Authority)
  6. The National AI Strategy (NAIS-PH), approved May 2025, sets out five pillars through 2028

What Are the NPC AI Guidelines?

On 19 December 2024, the Philippines National Privacy Commission (NPC) issued NPC Advisory No. 2024-04, providing Guidelines on the Application of the Data Privacy Act on AI Systems Processing Personal Data. The advisory represents the most significant regulatory signal on artificial intelligence governance to emerge from a Southeast Asian privacy authority in the past two years, clarifying how the existing Data Privacy Act of 2012 (Republic Act No. 10173) applies to AI systems that process personal data.

Although the guidelines are formally advisory rather than statutory, their practical weight should not be underestimated. They interpret mandatory obligations already embedded in the Data Privacy Act, and the NPC retains full authority to audit AI systems, investigate complaints regarding algorithmic bias, and enforce the DPA through compliance orders and penalties. Organizations operating AI systems in the Philippines should also note that NPC Circular 16-03 imposes a 72-hour breach notification requirement, creating an additional layer of urgency for enterprises that handle personal data at scale.
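The 72-hour window under NPC Circular 16-03 is concrete enough to operationalize. The sketch below is illustrative only (the helper names are assumptions, not from the circular); the 72-hour figure itself comes from the circular.

```python
from datetime import datetime, timedelta, timezone

# NPC Circular 16-03 requires breach notification within 72 hours of discovery.
# This is an illustrative sketch; function names are assumptions for clarity.
NOTIFICATION_WINDOW = timedelta(hours=72)

def npc_notification_deadline(discovered_at: datetime) -> datetime:
    """Return the latest time by which the NPC must be notified."""
    return discovered_at + NOTIFICATION_WINDOW

def hours_remaining(discovered_at: datetime, now: datetime) -> float:
    """Hours left before the window closes (negative if overdue)."""
    return (npc_notification_deadline(discovered_at) - now).total_seconds() / 3600

discovered = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
deadline = npc_notification_deadline(discovered)
print(deadline.isoformat())  # 2026-03-04T09:00:00+00:00
```

Tracking the deadline in UTC from the moment of discovery avoids ambiguity when incident response spans time zones.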

Key Provisions

Algorithmic Bias Requirements

The guidelines establish a structured framework for addressing bias in AI systems that process personal data. At its core, the advisory requires Personal Information Controllers (PICs) to actively identify, monitor, and limit biases across three recognized categories: systemic bias embedded in training data, institutional processes, or societal structures; human bias introduced through decisions made during AI development and deployment; and statistical bias arising from data collection, sampling, or modeling techniques.

The NPC takes a pragmatic stance on remediation. Rather than demanding the elimination of all bias, an outcome widely recognized as technically impractical, the guidelines require organizations to "limit" biases through meaningful, documented effort. This framing gives enterprises operational flexibility while still imposing a genuine duty of care.

The guidelines are especially pointed on the question of harm. AI systems must not produce manipulative or unduly oppressive outcomes for data subjects. This provision carries particular relevance for applications in credit scoring and lending decisions, employment screening and hiring, insurance underwriting and claims processing, and customer segmentation and pricing, all areas where algorithmic decisions can materially affect individuals' economic prospects.

AI Washing Prohibition

In a provision that reflects growing regulatory concern across multiple jurisdictions, the NPC explicitly prohibits "AI washing," the practice of misrepresenting the extent to which AI is involved in data processing. Organizations must accurately describe AI involvement in their privacy notices, refrain from overstating or understating AI's role in decision-making, and provide truthful information about how AI processes personal data. For enterprises that have marketed AI capabilities ahead of actual deployment, this requirement introduces meaningful disclosure risk.

NPC Audit Authority

The enforcement architecture of the guidelines gives the NPC broad investigative and remedial powers. The Commission may audit AI systems for compliance with the Data Privacy Act, investigate complaints related to algorithmic bias or discriminatory profiling, issue compliance orders requiring changes to AI system design or operation, and impose penalties for DPA violations. Taken together, these powers create a credible enforcement mechanism that should inform enterprise risk assessments for Philippine operations.

The Data Privacy Act Foundation

The NPC AI guidelines do not exist in isolation. They build directly on the Data Privacy Act's existing requirements, extending established privacy principles into the domain of algorithmic processing.

The DPA's consent framework applies with full force to AI systems. Valid consent is required for all personal data processing, with explicit consent mandated for sensitive personal data categories including health information, biometrics, and racial or ethnic origin. Critically, consent must be informed, meaning individuals must understand how AI-related processing affects their data. For organizations deploying complex machine learning models, meeting this standard requires clear, accessible communication about algorithmic decision-making.

Data Subject Rights

The DPA grants individuals a comprehensive set of rights that extend to AI-driven processing: the right to be informed about data processing including AI use, the right to access personal data held by the organization, the right to object to processing including automated processing, the right to erasure of personal data, the right to data portability, and the right to damages for privacy violations. For enterprises operating AI at scale, the right to object to automated processing and the right to erasure present the most significant operational challenges, as they may require the ability to unwind algorithmic decisions or remove data from trained models.
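The routing logic for AI-related data subject requests can be made explicit in intake tooling. The following is a minimal sketch under stated assumptions (the request types, queue names, and field names are hypothetical, not taken from the DPA or the advisory):

```python
from dataclasses import dataclass

# Illustrative sketch: routing data subject requests so that objections to
# automated processing reach human review. All names here are assumptions.
@dataclass
class SubjectRequest:
    subject_id: str
    request_type: str  # "access", "objection", "erasure", "portability"
    concerns_automated_decision: bool = False

def route_request(req: SubjectRequest) -> str:
    """Decide which internal queue handles a data subject request."""
    if req.request_type == "objection" and req.concerns_automated_decision:
        # Objections to automated processing are escalated for human review
        return "human-review"
    if req.request_type == "erasure":
        # Erasure may also require checking impact on trained models
        return "erasure-with-model-impact-check"
    return "standard-dpo-queue"

print(route_request(SubjectRequest("DS-001", "objection", True)))  # human-review
```

Separating the erasure path matters because removing a record from a database does not by itself remove its influence from a model trained on it.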

Proportionality and Legitimate Purpose

AI data processing must satisfy the DPA's proportionality standard, meaning it must be proportionate to its declared purpose. Data collection is limited to what is necessary, and purpose limitation applies strictly: data collected for one purpose cannot be repurposed for AI training without obtaining additional consent. This provision has direct implications for organizations seeking to leverage customer data to build or refine AI models.

Security Requirements

The DPA mandates reasonable and appropriate organizational, physical, and technical measures to protect personal data processed by AI systems. Regular security assessments and data breach notification requirements apply, creating a baseline security obligation that scales with the sensitivity of the data being processed.

Proposed AI Legislation

Beyond the NPC guidelines, the Philippine legislative landscape is evolving rapidly, with multiple AI-specific bills under consideration that could reshape the regulatory environment.

House Bill No. 1196 (AIDA)

The Artificial Intelligence Development Authority bill would establish a dedicated AI governance body under the Department of Science and Technology (DOST). The proposed legislation addresses national AI strategy development, regulatory standards for AI systems, compliance management frameworks, and transparency requirements for General-Purpose AI (GPAI). If enacted, AIDA would create a centralized regulatory counterpart to the NPC's data privacy jurisdiction, potentially producing overlapping compliance obligations for AI operators.

House Bill No. 3195 / Senate Bill No. 852

A parallel legislative effort would establish a Philippine Council on Artificial Intelligence alongside a formal AI Bill of Rights, signaling legislative intent to codify algorithmic accountability at the constitutional level.

National AI Strategy (NAIS-PH)

President Marcos Jr. approved the National AI Strategy on 20 May 2025, establishing a whole-of-government framework through 2028 built on five pillars: infrastructure development, workforce capacity building, innovation ecosystem cultivation, ethical policy frameworks, and strategic deployment in priority sectors. The strategy provides the policy backdrop against which both the NPC guidelines and proposed legislation should be understood.

How to Comply

Achieving compliance with the NPC AI guidelines requires a structured, phased approach. The following roadmap reflects the advisory's priorities and the underlying DPA obligations.

Step 1: DPA Compliance Review

The starting point is a thorough review of existing data privacy practices as they relate to AI. Organizations should revisit their Privacy Impact Assessments to ensure AI systems are adequately covered, verify that consent mechanisms explicitly address AI-related data processing, update privacy notices to accurately describe AI involvement (directly addressing the AI washing prohibition), and confirm that data subject rights procedures can accommodate AI-specific requests such as objections to automated decision-making.

Step 2: Algorithmic Bias Assessment

With the compliance baseline established, attention should turn to bias. This means identifying potential sources of systemic, human, and statistical bias across all AI systems, implementing monitoring mechanisms for ongoing bias detection, establishing clear procedures for bias mitigation when issues surface, and maintaining thorough documentation of bias assessments and remediation efforts. Documentation is particularly important given the NPC's audit authority; organizations that can demonstrate a good-faith, systematic approach to bias management will be better positioned in any regulatory engagement.
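One concrete statistical bias check that such monitoring might include is the disparate impact ratio across demographic groups. The sketch below is illustrative: the 0.8 "four-fifths" threshold is a common heuristic from US employment-law practice, not an NPC-mandated figure, and the counts are invented.

```python
# Minimal sketch of a statistical bias check: the disparate impact ratio.
# The 0.8 threshold is a widely used heuristic, not an NPC requirement.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_count, total_count)."""
    return {g: fav / total for g, (fav, total) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Example: loan approvals by group (counts are illustrative)
approvals = {"group_a": (80, 100), "group_b": (50, 100)}
ratio = disparate_impact_ratio(approvals)
print(f"{ratio:.3f}")  # 0.625 -- below the 0.8 heuristic, so flag for review
```

Running a check like this on a schedule, and logging each result, produces exactly the kind of documented, good-faith monitoring trail the guidelines contemplate.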

Step 3: Transparency Implementation

The guidelines' emphasis on accurate disclosure requires organizations to move beyond boilerplate privacy language. Enterprises should accurately describe AI involvement in privacy notices and terms of service, provide accessible information about how AI affects individual outcomes, implement dedicated channels for data subjects to ask questions about AI-driven decisions, and train customer-facing staff to handle AI-related inquiries competently.

Step 4: NPC Audit Readiness

Finally, organizations should prepare for the possibility of regulatory scrutiny. This means maintaining comprehensive documentation of AI governance practices, recording bias assessment methodologies and their results, keeping detailed records of all data processing activities related to AI, and assembling the materials necessary to respond promptly to NPC inquiries or formal audits.
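Audit-ready documentation is easier to produce if bias assessments are captured in a structured, serializable record from the start. A minimal sketch, with field names that are assumptions for illustration:

```python
import json
from dataclasses import dataclass, asdict

# Illustrative sketch: a structured bias-assessment record that could feed an
# append-only audit log. Field names are hypothetical, not NPC-prescribed.
@dataclass
class BiasAssessmentRecord:
    system_name: str
    assessed_on: str               # ISO date
    bias_types_reviewed: list      # e.g. ["systemic", "human", "statistical"]
    findings: str
    remediation: str

def to_audit_json(record: BiasAssessmentRecord) -> str:
    """Serialize a record with stable key order for an audit log."""
    return json.dumps(asdict(record), sort_keys=True)

rec = BiasAssessmentRecord(
    system_name="credit-scoring-v2",
    assessed_on="2026-01-15",
    bias_types_reviewed=["systemic", "human", "statistical"],
    findings="Selection-rate gap across regions exceeded internal threshold.",
    remediation="Re-weighted training sample; scheduled quarterly re-check.",
)
print(to_audit_json(rec))
```

Stable key ordering and append-only storage make it straightforward to hand a chronological trail to the NPC on request.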

How Philippine Requirements Compare with Regional Frameworks

Philippines versus Singapore PDPA. The Philippine Data Privacy Act of 2012 (Republic Act 10173) predates Singapore's Personal Data Protection Act amendments, but the NPC's 2024 enforcement guidelines introduce extraterritorial provisions comparable to GDPR Article 3. Philippine requirements mandate 72-hour breach notification, a standard more stringent than Singapore's notification obligation, which permits reasonable assessment periods before disclosure is required.

Philippines versus Indonesia UU PDP. Indonesia's Personal Data Protection Law (enacted October 2024) shares structural similarities with the Philippine framework but delegates detailed enforcement guidelines to Government Regulation that remained pending as of March 2026. Philippine organizations benefit from more mature enforcement precedent through existing NPC decisions and advisory opinions, giving them greater regulatory clarity in the near term.

Philippines versus Thailand PDPA. Thailand's Personal Data Protection Act enforcement commenced June 2022, with supplementary artificial intelligence governance provisions under the draft Digital Economy and Society Ministry AI Act circulated November 2025. Philippine requirements currently provide more specific automated decision-making protections but lack Thailand's emerging sector-specific AI governance framework. For multinational enterprises operating across ASEAN, the divergence between these national approaches underscores the importance of jurisdiction-by-jurisdiction compliance mapping.

Practical Compliance Roadmap for Philippine Enterprises

The path from awareness to compliance requires concrete, sequenced action. Organizations should begin by inventorying all generative tool deployments across departments, documenting which platforms process employee and customer personal information. From there, enterprises should conduct Privacy Impact Assessments for each deployment using NPC-recommended templates available through the NPC website (privacy.gov.ph), ensuring that assessments reflect the specific risks associated with AI-driven processing.

Privacy notices must then be updated to disclose automated processing activities, including the specific purposes and categories of personal data involved. Organizations should also establish robust data subject rights procedures so that individuals can request human review of automated decisions, obtain explanations of algorithmic logic, and exercise objection rights under Section 34 of the Data Privacy Act.

Finally, enterprises should implement a data localization review, assessing whether generative platform API endpoints route Philippine personal data through servers located outside the country and documenting applicable transfer mechanisms. For organizations relying on cloud-hosted AI services, this step is especially critical, as cross-border data flows may trigger additional DPA requirements that are easy to overlook in the rush to deploy.
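A data localization review reduces to an inventory problem: which vendors route Philippine personal data abroad, and is a transfer mechanism documented for each? A hedged sketch (vendor names, regions, and the mechanism labels are all hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of a data localization review. All vendor names,
# regions, and mechanism labels below are hypothetical examples.
@dataclass
class AIVendorEndpoint:
    vendor: str
    endpoint_region: str                 # where the API processes data
    transfer_mechanism: Optional[str]    # e.g. "contractual clauses", or None

def flag_transfers(endpoints: list) -> list:
    """Return vendors routing PH data abroad with no documented mechanism."""
    return [
        e.vendor
        for e in endpoints
        if e.endpoint_region != "PH" and e.transfer_mechanism is None
    ]

inventory = [
    AIVendorEndpoint("chat-vendor-x", "US", None),
    AIVendorEndpoint("ocr-vendor-y", "SG", "contractual clauses"),
    AIVendorEndpoint("local-vendor-z", "PH", None),
]
print(flag_transfers(inventory))  # ['chat-vendor-x']
```

Any vendor the check flags becomes a concrete remediation item: either document a transfer mechanism or move the processing onshore.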

Common Questions

Are the NPC AI guidelines legally binding?

The guidelines themselves are advisory. However, they clarify how the mandatory Data Privacy Act applies to AI systems. The obligations they describe — consent, data subject rights, security, proportionality — are mandatory under the DPA, and the NPC can audit AI systems and enforce DPA compliance.

What is AI washing?

AI washing means misrepresenting the extent of AI involvement in data processing — for example, claiming a decision was made by AI when it was human-made (or vice versa), or overstating AI capabilities in privacy notices. The guidelines prohibit this to ensure individuals have truthful information about how their data is processed.

Can the NPC audit my AI systems?

Yes. The NPC has the authority to audit AI systems for compliance with the Data Privacy Act. It can investigate complaints about algorithmic bias, discriminatory profiling, or other DPA violations. Organizations should maintain documentation of their AI governance practices and bias assessment results.

Does the Philippines have dedicated AI legislation?

Not yet. Multiple AI bills are under consideration in Congress (including House Bill No. 1196 establishing AIDA), and the National AI Strategy was approved in May 2025. For now, AI is regulated through the existing Data Privacy Act as interpreted by the NPC guidelines. Dedicated legislation may come in the next one to two years.

What types of algorithmic bias do the guidelines cover?

The NPC guidelines identify three types: systemic bias (embedded in training data or societal structures), human bias (introduced through development decisions), and statistical bias (from data collection or modeling). Organizations must identify, monitor, and limit all three types in their AI systems.

References

  1. NPC Advisory No. 2024-04: Guidelines on Application of DPA to AI Systems. Philippines National Privacy Commission (2024).
  2. Data Privacy Act of 2012 (Republic Act No. 10173). Government of the Philippines (2012).
  3. National AI Strategy for the Philippines (NAIS-PH). Department of Science and Technology (DOST) (2025).
  4. NPC Circular 16-03: Personal Data Breach Management. Philippines National Privacy Commission (2016).
  5. NPC Advisories & Circulars. Philippines National Privacy Commission (2024).
  6. House Panel Tackles AI Governance Bill (HB 1196 — AIDA). BusinessMirror (2025).
  7. Summary: Philippines Data Privacy Act and Implementing Regulations. IAPP (2024).
Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

