What Is the Indonesia PDP Law?
Law No. 27 of 2022 on Personal Data Protection (Undang-Undang Perlindungan Data Pribadi, or UU PDP) is Indonesia's first comprehensive data protection law. Modeled on the EU's GDPR, it was enacted on 17 October 2022 and became fully effective on 17 October 2024 after a two-year grace period.
For AI companies, the PDP Law is the primary legal framework governing how personal data is used in AI development, training, and deployment. A dedicated PDP Agency is planned to be operational by 2026 to enforce the law.
Why Indonesia Matters for AI Compliance
Indonesia is Southeast Asia's largest economy, home to over 270 million people and experiencing rapidly accelerating digital adoption. The country's digital economy is growing at over 20% annually, making it one of the fastest-expanding digital markets in the region.
The Indonesian government has signaled clear intent to build a structured AI governance environment. In August 2020, it launched the Stranas KA (National AI Strategy), a long-range development roadmap targeting progress through 2045. Sector-specific oversight is already taking shape as well. The OJK (Financial Services Authority) has issued AI governance guidelines specifically for financial institutions, establishing binding expectations for banks and fintech companies operating in the market.
Perhaps most consequentially, a Presidential Regulation (Perpres) on AI Ethics and Safety is expected in 2026, pushed back from an original 2025 target. This regulation will mark Indonesia's shift from voluntary AI governance principles to mandatory requirements. For companies building or deploying AI in Indonesia, the compliance window is narrowing.
Personal Data Categories Under PDP Law
The PDP Law draws a clear distinction between general and specific (sensitive) personal data, and understanding this classification is essential for any AI system processing Indonesian user data.
General Personal Data
General personal data under the PDP Law encompasses the foundational identifiers most AI systems will encounter. This includes an individual's full name, gender, nationality, religion, and marital status. The law also captures any combination of personal data points that, taken together, can identify a specific person. This broad aggregation clause is particularly relevant for AI systems that merge multiple data sources during training or inference.
Specific (Sensitive) Personal Data
Specific personal data carries a higher protection threshold and stricter processing requirements. This category covers health data and information, biometric data, genetic data, criminal records, children's data, personal financial data, and any other data types that may be designated by future regulations.
For AI systems, the biometric, genetic, health, financial, and children's data categories demand particular attention. AI applications in healthcare diagnostics, fintech credit scoring, biometric security, and educational technology frequently process one or more of these sensitive categories, triggering the PDP Law's heightened consent and safeguarding obligations.
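During data mapping, it helps to flag fields that fall into the specific (sensitive) categories so heightened consent and safeguarding obligations can be applied automatically. A minimal sketch, assuming each field has already been tagged with a category during data mapping (the category tags and schema shape here are illustrative, not defined by the law):

```python
# Specific (sensitive) categories named by the PDP Law; tag names are
# illustrative shorthand, not statutory terms.
SPECIFIC_CATEGORIES = {
    "health", "biometric", "genetic",
    "criminal_record", "children", "financial",
}

def sensitive_fields(schema: dict[str, str]) -> list[str]:
    """Return the fields in a dataset schema tagged with a sensitive category.

    `schema` maps field name -> category tag assigned during data mapping.
    """
    return [field for field, category in schema.items()
            if category in SPECIFIC_CATEGORIES]
```

Any field this returns would trigger the stricter processing requirements described above, so a training pipeline could refuse to ingest a dataset until those fields are reviewed.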
Core Requirements for AI Systems
Lawful Basis for Processing
Like the GDPR, the PDP Law requires organizations to establish a lawful basis before processing personal data. The law recognizes six bases:
- Consent: explicit, informed agreement from the data subject.
- Contractual necessity: processing required to perform a contract with the individual.
- Legal obligation: processing mandated by Indonesian law.
- Vital interests: processing necessary to protect someone's life.
- Public interest: processing carried out in service of a recognized public function.
- Legitimate interests: processing where the organization's interests are not overridden by the individual's rights; this requires a documented balancing test.
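A compliance program typically records the chosen basis alongside each processing activity. The sketch below models this as a simple record, with the documented-balancing-test requirement for legitimate interests enforced in code; the class and field names are assumptions for illustration, not terms from the law:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LawfulBasis(Enum):
    # The six lawful bases recognized by the PDP Law; enum names are
    # illustrative shorthand, not statutory wording.
    CONSENT = "consent"
    CONTRACTUAL_NECESSITY = "contractual_necessity"
    LEGAL_OBLIGATION = "legal_obligation"
    VITAL_INTERESTS = "vital_interests"
    PUBLIC_INTEREST = "public_interest"
    LEGITIMATE_INTERESTS = "legitimate_interests"

@dataclass
class ProcessingActivity:
    purpose: str
    basis: LawfulBasis
    # Reference to the balancing-test document; required when the basis
    # is legitimate interests.
    balancing_test_ref: Optional[str] = None

    def is_documented(self) -> bool:
        if self.basis is LawfulBasis.LEGITIMATE_INTERESTS:
            return self.balancing_test_ref is not None
        return True
```

A compliance review could then iterate over all recorded activities and flag any legitimate-interests processing that lacks a balancing-test reference.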
Consent Requirements
When an AI system relies on consent as its lawful basis, the PDP Law sets a high bar. Consent must be specific, informed, and unambiguous, leaving no room for pre-checked boxes or bundled permissions. Individuals retain the right to withdraw consent at any time, and organizations must make withdrawal as straightforward as the original opt-in. For sensitive data categories, the law requires explicit consent, which demands a clear affirmative action from the data subject. Organizations using personal data for AI training purposes should take particular care to explain clearly how data will be collected, processed, and applied within their models.
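The requirements above map naturally onto a per-purpose consent record: one record per specific purpose (no bundling), an explicit affirmative-action flag, and a withdrawal path that is as simple as the grant. A minimal sketch, with all names and fields assumed for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str               # one specific purpose per record -- no bundling
    granted_at: datetime
    explicit: bool             # affirmative action captured (never pre-checked)
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Withdrawal must be as easy as the original opt-in;
        # here it is a single call.
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_valid(self) -> bool:
        return self.explicit and self.withdrawn_at is None
```

A training pipeline would check `is_valid()` for the specific purpose (e.g. "model training") before including a subject's data, rather than relying on consent collected for an unrelated purpose.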
Data Controller and Data Processor
The PDP Law establishes distinct roles and responsibilities for data controllers and data processors. Data controllers are the entities that determine the purposes and means of processing; in an AI context, this is typically the company deploying the AI system. Data processors handle data on behalf of controllers and include AI vendors, cloud infrastructure providers, and third-party model hosts.
Both roles carry specific obligations under the law. Critically, data controllers cannot fully delegate their compliance responsibilities to processors. An organization deploying AI through a third-party vendor remains accountable for how that vendor handles personal data.
Cross-Border Data Transfer
Personal data may only be transferred outside Indonesia under specific conditions. The destination country must maintain adequate data protection laws, or the transferring organization must put adequate safeguards in place through contractual clauses or binding corporate rules. Alternatively, the data subject may provide informed consent to the transfer.
This provision has direct implications for AI systems that rely on cloud infrastructure hosted outside Indonesia, including major platforms operated by global hyperscalers. Companies must evaluate their data flows and ensure that any cross-border transfer meets at least one of the PDP Law's conditions.
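The three transfer conditions can be expressed as a simple disjunctive check that a data-flow review could apply to each destination. A sketch under the assumption that adequacy status, safeguard status, and consent status have already been determined elsewhere (the function and parameter names are illustrative):

```python
def transfer_permitted(destination_country: str,
                       adequate_countries: set[str],
                       has_contractual_safeguards: bool,
                       subject_consented: bool) -> bool:
    """Return True if at least one PDP Law cross-border condition holds:
    destination adequacy, contractual/binding-corporate-rule safeguards,
    or the data subject's informed consent to the transfer."""
    return (destination_country in adequate_countries
            or has_contractual_safeguards
            or subject_consented)
```

In practice an organization would run this kind of check per data flow, since a single AI system may push data to several jurisdictions via its cloud provider.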
Data Subject Rights
The PDP Law grants individuals a comprehensive set of rights over their personal data. Individuals have the right to be informed about how their data is being processed, the right to access their personal data held by an organization, and the right to rectify any inaccuracies. They may also exercise the right to delete their personal data, the right to restrict processing under certain conditions, and the right to data portability, allowing them to obtain and reuse their data across services.
Of particular significance for AI deployments is the right to object to profiling and automated decision-making. Any AI system that makes or materially influences decisions about individuals (credit approvals, insurance underwriting, hiring recommendations) must be designed to accommodate this right, including the ability to provide meaningful human review when requested.
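Accommodating the objection right means an automated decision cannot be final once the subject objects; it must be routed to human review. A minimal sketch of that routing, with all names and the queueing placeholder assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str               # e.g. "declined", "approved"
    fully_automated: bool      # produced with no human involvement
    human_reviewed: bool = False

def handle_objection(decision: AutomatedDecision) -> AutomatedDecision:
    """On a subject's objection, route a fully automated decision to
    meaningful human review instead of treating the model output as final."""
    if decision.fully_automated and not decision.human_reviewed:
        # Placeholder: in a real system this would enqueue the case
        # for a human reviewer with authority to change the outcome.
        decision.human_reviewed = True
    return decision
```

The key design point is that the review path exists in the system from the start; bolting human review onto a fully automated pipeline after deployment is far harder.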
OJK AI Guidelines for Financial Services
The OJK (Otoritas Jasa Keuangan) published its AI Governance for Indonesian Banking guidelines on 29 April 2025. These guidelines are mandatory for financial institutions operating in Indonesia and represent one of the region's most concrete sector-specific AI governance frameworks.
Six Basic Principles
The OJK framework is built on six foundational principles:
- Based on Pancasila: AI systems must align with Indonesia's national philosophical framework.
- Beneficial: AI must create genuine value for customers and society rather than serving solely institutional interests.
- Fair and just: AI must be designed and monitored to prevent discrimination across protected groups.
- Accountable: organizations must maintain clear lines of responsibility for AI outcomes.
- Transparent and explainable: AI decisions must be understandable by affected individuals and regulators.
- Resilient and secure: AI must be built with robust safeguards against failure and cyber threats.
Key Focus Areas
The OJK guidelines place particular emphasis on consumer protection within AI-driven financial services, requiring institutions to demonstrate that automated systems serve customer interests. Model and data reliability standards apply directly to credit scoring and risk assessment, where flawed AI outputs can have severe consequences for individuals and systemic stability. The guidelines also address financial inclusion, requiring institutions to ensure that AI does not inadvertently exclude underserved populations from accessing services. Compliance with the PDP Law's data protection requirements is explicitly referenced, and the OJK expects financial institutions to maintain strong cyber resilience across all AI systems.
Upcoming: Perpres on AI Ethics and Safety
A Presidential Regulation (Perpres) on AI Ethics and Safety is expected in 2026, pushed back from its original 2025 timeline. The Jakarta Post reported that the regulation was 90% complete as of late 2025, suggesting that its core provisions are largely settled even as the formal issuance date shifts.
The Perpres represents a fundamental shift in Indonesia's AI governance posture, moving from voluntary guidelines to mandatory requirements with enforcement mechanisms. Expected provisions include mandatory registration of high-risk AI systems, required impact assessments for high-risk AI applications, penalties for non-compliance with registration requirements, and alignment with Indonesia's National AI Strategy (Stranas KA). For organizations currently operating under the assumption that Indonesia's AI rules are advisory, the Perpres will require a significant recalibration of compliance programs.
How to Comply
Step 1: PDP Law Compliance
The foundation of any AI compliance program in Indonesia begins with full adherence to the PDP Law. Organizations should start by identifying every personal data processing activity within their AI systems, mapping data flows from collection through training, inference, and storage. Each processing activity requires a documented lawful basis, whether consent, contractual necessity, legitimate interest, or another recognized ground. Where consent is the chosen basis, organizations must implement robust consent mechanisms that meet the PDP Law's standards for specificity, clarity, and ease of withdrawal. Procedures for handling data subject rights requests (access, deletion, portability, objection to automated decision-making) must be established and tested. Finally, organizations should appoint a data protection officer or designate a responsible internal team to oversee PDP Law compliance on an ongoing basis.
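The data-mapping step above can be sketched as a register of data-flow entries covering each stage from collection through storage, so gaps are caught mechanically. The stage names, record shape, and helper below are assumptions for illustration:

```python
from dataclasses import dataclass

# Pipeline stages the mapping must cover; names are illustrative.
STAGES = ("collection", "training", "inference", "storage")

@dataclass
class DataFlowEntry:
    dataset: str
    stage: str
    data_categories: list[str]   # e.g. ["name", "financial"]
    lawful_basis: str            # e.g. "consent", "contractual_necessity"

def unmapped_stages(entries: list[DataFlowEntry], dataset: str) -> list[str]:
    """Return the pipeline stages for which a dataset has no documented
    data-flow entry (and therefore no documented lawful basis)."""
    covered = {e.stage for e in entries if e.dataset == dataset}
    return [s for s in STAGES if s not in covered]
```

Running this check per dataset gives a concrete to-do list: every stage returned is a processing activity still lacking documentation.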
Step 2: AI-Specific Data Governance
Beyond baseline PDP compliance, AI systems require dedicated data governance practices. Organizations should audit all AI training data for the presence of personal data, paying particular attention to sensitive categories such as biometric, health, financial, and children's data. Where feasible, anonymization or pseudonymization techniques should be applied to reduce regulatory exposure without sacrificing model performance. Data provenance documentation is essential; organizations must be able to trace the origin, processing history, and legal basis for every dataset used in AI training. Clear data retention and deletion policies specific to AI data should be established, ensuring that training data is not retained indefinitely without justification.
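One common pseudonymization technique for direct identifiers in training data is a keyed hash, which replaces the identifier with a token that cannot be reversed or brute-forced without the key. A minimal sketch (this is one technique among several, and pseudonymized data generally remains personal data under laws of this kind since re-identification is possible with the key):

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with an HMAC-SHA256 token.

    The key must be stored separately from the training data under
    strict access control; whoever holds it can re-link tokens to people.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

Because the same identifier always maps to the same token under a given key, joins across datasets still work, while the raw identifier never enters the training corpus.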
Step 3: OJK Compliance (Financial Services)
Financial institutions face an additional compliance layer under the OJK's AI governance guidelines. Organizations should map each AI system against the OJK's six foundational principles, identifying gaps and remediation priorities. Fairness monitoring programs should be implemented for credit scoring, risk assessment, and other decision-making systems that directly affect customers. Explainability requirements demand that customer-facing AI decisions can be meaningfully described to both the affected individual and to regulators upon request. Regular audits of AI model performance, including bias testing and accuracy validation, should be conducted and documented.
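A fairness monitoring program usually starts with a simple group-outcome metric computed over recent decisions. The sketch below uses the demographic parity difference for approval rates; this is one of several possible bias metrics, and the alerting threshold is an institutional choice, not something the OJK guidelines prescribe:

```python
def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Approval rate for one group; `decisions` pairs a group label
    with an approved/declined outcome."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def parity_gap(decisions: list[tuple[str, bool]],
               group_a: str, group_b: str) -> float:
    """Demographic parity difference between two groups' approval rates.
    A monitoring job would alert when this exceeds a chosen threshold."""
    return abs(approval_rate(decisions, group_a)
               - approval_rate(decisions, group_b))
```

Computed periodically over, say, each month's credit decisions, this gives an auditable time series that can be shown to regulators alongside accuracy validation results.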
Step 4: Prepare for Perpres
Even before the Presidential Regulation on AI Ethics and Safety is formally issued, organizations should begin preparation. Monitoring government announcements and draft publications will help teams anticipate specific requirements. Organizations should inventory all AI systems that may be classified as high-risk under the expected framework and begin preparing impact assessments for those applications. Building internal documentation and registration capabilities now will reduce the compliance burden when the regulation takes effect, avoiding the scramble that typically follows new regulatory mandates.
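The inventory step above can be kept as a simple register of AI systems with a provisional risk flag, so impact-assessment gaps are visible before the Perpres lands. The record shape is an assumption; what counts as "high-risk" will only be settled by the regulation itself:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    high_risk: bool                    # provisional, pending the Perpres definition
    impact_assessment_done: bool = False

def pending_assessments(inventory: list[AISystemRecord]) -> list[str]:
    """Names of provisionally high-risk systems that still lack a
    completed impact assessment."""
    return [s.name for s in inventory
            if s.high_risk and not s.impact_assessment_done]
```

When the regulation is issued, the provisional `high_risk` flags get re-evaluated against the official criteria, but the documentation habit and registration data are already in place.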
Related Regulations
- Singapore PDPA & AI: Comparable data protection framework with more mature AI guidance
- Malaysia PDPA 2010: Similar evolving data protection with AI implications
- ASEAN AI Governance Guide: Regional principles that Indonesia's framework aligns with
- EU GDPR: The model on which Indonesia's PDP Law is based
Common Questions
Is the PDP Law currently in effect?
Yes. The PDP Law (UU PDP, Law No. 27 of 2022) became fully effective on 17 October 2024 after a two-year grace period. All organizations processing personal data of Indonesian residents must comply. A dedicated PDP enforcement agency is planned to be operational by 2026.
Does the PDP Law apply to AI systems?
Yes. The PDP Law applies to all processing of personal data, regardless of the technology used. AI systems that collect, store, process, or generate personal data — including for training, inference, and output — must comply with all PDP Law requirements.
When will Indonesia's dedicated AI regulation be issued?
A Presidential Regulation (Perpres) on AI Ethics and Safety is expected in 2026. It was reported to be 90% complete in late 2025. This will establish mandatory requirements for high-risk AI systems, including registration and impact assessment obligations.
Can Indonesian personal data be processed on cloud infrastructure outside Indonesia?
Only if the destination country has adequate data protection laws or adequate safeguards are in place. Cross-border transfer of personal data requires compliance with PDP Law provisions. Using cloud-based AI infrastructure outside Indonesia for processing Indonesian personal data must meet these requirements.
Do the OJK AI guidelines apply to all companies?
No. OJK AI guidelines are mandatory only for financial institutions regulated by OJK — banks, insurance companies, fintech companies, and capital market participants. Non-financial companies should follow the general PDP Law and AIGE-style voluntary guidelines until the Perpres takes effect.
References
- Indonesia: Personal Data Protection Act Enters into Force. Library of Congress (2022).
- Artificial Intelligence Governance for Indonesian Banking. OJK (Financial Services Authority) (2025).
- Priorities and Challenges of Indonesia's Artificial Intelligence National Strategy (Stranas KA). SAFEnet (2022).
- AI Rules Pushed to 2026 as Govt Charts Next Move. The Jakarta Post (2025).
- Indonesia — Global AI Ethics and Governance Observatory. UNESCO (2024).
- What Are the Consequences of Breaches of Data Protection Law in Indonesia?. SSEK Law Firm (2024).
- Highlights of Indonesia's Personal Data Protection Law. Norton Rose Fulbright (2022).

