Why AI Governance Matters for Indonesian Companies
As artificial intelligence tools become more widely available and more capable, Indonesian companies face an important question: how do we use AI responsibly? This is not merely an academic concern. Companies that deploy AI tools without clear policies risk data breaches, regulatory violations, reputational damage and erosion of stakeholder trust. Conversely, companies that establish thoughtful AI governance frameworks can use AI confidently, knowing that appropriate safeguards are in place.
AI governance refers to the policies, processes and structures that guide how an organisation adopts and uses artificial intelligence. It encompasses everything from acceptable use policies for individual employees to enterprise-level risk assessment frameworks. For Indonesian companies, AI governance also involves understanding and complying with the country's evolving regulatory landscape, including the National AI Strategy and the Personal Data Protection Law.
This guide provides a practical framework for Indonesian companies looking to establish AI governance. It is written for business leaders, compliance professionals, IT managers and anyone responsible for guiding their organisation's approach to AI adoption.
Indonesia's National AI Strategy: Stranas KA
Indonesia's Strategi Nasional Kecerdasan Artifisial (Stranas KA), or National Artificial Intelligence Strategy, provides the government's vision for AI development and adoption across the country. Understanding this strategy is valuable context for any company developing its own AI governance framework.
The Stranas KA outlines several key priorities, including developing AI talent, supporting AI research and innovation, promoting ethical AI use and leveraging AI for public benefit. The strategy recognises that AI adoption must be balanced with considerations around privacy, fairness, transparency and accountability.
For companies, the Stranas KA signals the government's supportive stance towards AI adoption while also indicating the direction of future regulation. Companies that align their AI governance frameworks with the principles outlined in the national strategy will be well positioned as the regulatory environment evolves.
Key takeaways from the Stranas KA for corporate governance include:
- Ethical AI use is a priority. The government expects organisations to use AI responsibly, with attention to fairness and transparency.
- Data protection is foundational. AI governance must integrate with broader data protection practices.
- Talent development is essential. The strategy emphasises the importance of AI literacy and skills across the workforce.
- Cross-sector collaboration is encouraged. Companies are encouraged to share best practices and contribute to the broader AI ecosystem.
UU PDP: Indonesia's Personal Data Protection Law
The Undang-Undang Pelindungan Data Pribadi (UU PDP), enacted in 2022, is Indonesia's comprehensive personal data protection law. It establishes rules for the collection, processing, storage and sharing of personal data, with significant penalties for non-compliance. For any company using AI tools, UU PDP compliance is a fundamental governance requirement.
AI tools process data — sometimes including personal data — to generate their outputs. When employees use AI tools to analyse customer information, draft personalised communications, or process documents containing personal details, they may be creating data protection obligations for their organisation.
Key UU PDP Provisions Relevant to AI
Consent and purpose limitation. Personal data should be collected and processed with appropriate consent and for specified purposes. If AI tools are used to process personal data, the organisation must ensure that this use falls within the purposes for which consent was obtained.
Data minimisation. Organisations should only process personal data that is necessary for the specified purpose. This principle has direct implications for AI use — employees should not feed more personal data into AI tools than is strictly necessary.
Data subject rights. Individuals have rights regarding their personal data, including the right to access, correct, delete and restrict processing. Companies must be able to fulfil these rights even when AI tools have been involved in data processing.
Data transfer restrictions. UU PDP includes provisions on cross-border data transfers. When AI tools process data on servers located outside Indonesia, this may constitute a cross-border transfer that requires appropriate safeguards.
Breach notification. Organisations must notify the relevant authority and affected individuals in the event of a personal data breach. If an AI tool is involved in a data breach — for example, if sensitive data is inadvertently exposed through an AI platform — the notification obligations still apply.
Practical Compliance Steps
For Indonesian companies, UU PDP compliance in the context of AI use involves several practical steps:
- Map your AI data flows. Understand what data your employees are inputting into AI tools and where that data is processed and stored.
- Choose enterprise-grade AI platforms. Use AI tools that offer data protection features such as data encryption, no-training-on-input policies and contractual data processing commitments.
- Train employees on data handling. Ensure that all AI users understand what types of data they may and may not input into AI tools.
- Update privacy notices. If your organisation uses AI to process personal data, ensure that your privacy notices reflect this.
- Document your practices. Maintain records of how AI tools are used in your organisation, including the types of data processed and the safeguards applied.
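As an illustration, the data-flow mapping and documentation steps above could be captured in a simple structured inventory. The record fields and the cross-border check below are hypothetical examples, not a prescribed UU PDP schema:

```python
from dataclasses import dataclass, field

# A minimal sketch of an AI data-flow inventory record.
# Field names and categories are illustrative assumptions.
@dataclass
class AIToolRecord:
    tool_name: str
    vendor: str
    data_categories: list        # e.g. "personal", "financial", "public"
    processing_location: str     # country where input data is processed/stored
    safeguards: list = field(default_factory=list)

def flags_cross_border_personal_data(record: AIToolRecord) -> bool:
    """Flag entries that may constitute a cross-border transfer of personal
    data, which requires appropriate safeguards under UU PDP."""
    return ("personal" in record.data_categories
            and record.processing_location.lower() != "indonesia")

inventory = [
    AIToolRecord("ChatAssist", "ExampleVendor", ["personal"], "Singapore",
                 ["encryption at rest", "DPA in place"]),
    AIToolRecord("DocSummarizer", "ExampleVendor", ["public"], "Indonesia"),
]

# Entries to review for cross-border transfer safeguards.
needs_review = [r.tool_name for r in inventory
                if flags_cross_border_personal_data(r)]
print(needs_review)  # -> ['ChatAssist']
```

Even a lightweight inventory like this gives the governance lead a single place to check which tools touch personal data and where that data goes.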
Building an AI Governance Framework
An effective AI governance framework does not need to be bureaucratic or complex. For most Indonesian companies, a practical governance framework includes the following components:
AI Acceptable Use Policy
An acceptable use policy is the cornerstone of AI governance. It sets clear expectations for how employees may use AI tools in the workplace. A well-drafted policy should cover:
- Approved tools. Which AI platforms and tools are approved for business use? This prevents employees from using consumer-grade tools that may not meet the organisation's security requirements.
- Data handling rules. What types of data may be input into AI tools? Customer personal data, financial data, trade secrets and other sensitive information typically require restrictions.
- Quality assurance. All AI-generated output should be reviewed by a human before being used for business purposes, shared with clients or published externally.
- Disclosure requirements. When, if ever, should the use of AI tools be disclosed to clients, customers or other stakeholders?
- Prohibited uses. Are there any uses of AI that are explicitly prohibited? For example, using AI to make consequential decisions about individuals without human oversight.
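The policy elements above can also be expressed as machine-checkable rules, which makes it easier to build tooling or self-service checks around the policy. The tool names, data categories and prohibited uses below are hypothetical examples, not recommended values:

```python
# A sketch of an acceptable use policy as machine-checkable rules.
# All names and categories here are illustrative assumptions.
APPROVED_TOOLS = {"enterprise-chat", "code-assistant"}

# Data categories each approved tool may receive as input.
ALLOWED_INPUTS = {
    "enterprise-chat": {"public", "internal"},
    "code-assistant": {"public", "internal", "source-code"},
}

PROHIBITED_USES = {"automated-hiring-decision", "credit-scoring-without-review"}

def check_use(tool: str, data_category: str, use_case: str) -> tuple:
    """Return (allowed, reason) for a proposed AI use."""
    if tool not in APPROVED_TOOLS:
        return False, f"{tool} is not an approved tool"
    if use_case in PROHIBITED_USES:
        return False, f"{use_case} is explicitly prohibited"
    if data_category not in ALLOWED_INPUTS[tool]:
        return False, f"{data_category} data may not be input into {tool}"
    return True, "allowed, subject to human review of outputs"

print(check_use("enterprise-chat", "customer-personal", "draft-email"))
```

Note that even an "allowed" result is conditional on human review, mirroring the quality assurance requirement in the policy itself.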
Roles and Responsibilities
Clear accountability is essential for effective AI governance. Companies should designate responsibility for AI governance, which may involve:
- AI governance lead. A senior individual responsible for overseeing the organisation's AI policies and practices. In smaller companies, this may be an existing role (such as the CTO, compliance officer or data protection officer) with an expanded mandate.
- Department champions. Individuals within each department who promote responsible AI use, serve as a resource for colleagues and escalate concerns.
- IT and security team. Responsible for evaluating and approving AI tools, managing access controls and monitoring security.
Risk Assessment Process
Before adopting a new AI tool or applying AI to a new use case, companies should conduct a risk assessment. This need not be a lengthy formal process for every minor tool, but should be proportionate to the potential impact. Key questions include:
- What data will the AI tool process?
- Could AI errors cause harm to customers, employees or the business?
- Does the use case involve any regulated activities?
- Are there potential bias or fairness concerns?
- What are the data security and privacy implications?
For higher-risk applications — those involving sensitive personal data, consequential decisions about individuals, or regulated activities — a more detailed assessment is warranted.
Vendor Due Diligence
When selecting AI tools and platforms, companies should evaluate vendors with AI-specific criteria:
- Data handling practices. How does the vendor process, store and protect input data? Does the vendor use input data to train its models?
- Security certifications. Does the vendor hold relevant security certifications (such as SOC 2 or ISO 27001)?
- Contractual protections. Does the vendor offer data processing agreements that align with UU PDP requirements?
- Transparency. Does the vendor provide clear information about how its AI models work, their limitations and any known biases?
Monitoring and Review
AI governance is not a set-and-forget exercise. Companies should regularly review their AI policies and practices to ensure they remain current and effective. This includes:
- Regular policy reviews. Update AI policies at least annually, or more frequently if the regulatory environment changes.
- Usage monitoring. Track how AI tools are being used across the organisation to identify any emerging risks or policy gaps.
- Incident response. Establish a process for handling AI-related incidents, such as data exposure through an AI tool, AI-generated errors that cause harm, or employee misuse.
- Feedback mechanisms. Encourage employees to report concerns, share suggestions and provide feedback on AI governance practices.
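The incident response step above implies keeping a consistent record of AI-related incidents. A minimal record might look like the following sketch; the fields and categories are hypothetical, and the notification check is a first-pass filter only:

```python
from dataclasses import dataclass
from datetime import date

# A sketch of a minimal AI incident record.
# Field names and categories are illustrative assumptions.
@dataclass
class AIIncident:
    reported: date
    category: str              # e.g. "data-exposure", "output-error", "misuse"
    personal_data_involved: bool
    description: str

def requires_breach_notification(incident: AIIncident) -> bool:
    """Exposure of personal data may trigger UU PDP notification duties;
    the final determination should rest with legal/compliance, not code."""
    return incident.category == "data-exposure" and incident.personal_data_involved

incident = AIIncident(date(2024, 5, 1), "data-exposure", True,
                      "Customer details pasted into an unapproved AI tool")
print(requires_breach_notification(incident))  # -> True
```

Structured records like this also feed directly into the regular policy reviews, since recurring incident categories point to gaps in the acceptable use policy or training.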
Responsible AI Principles for Indonesian Companies
Beyond compliance with specific regulations, Indonesian companies benefit from articulating a set of responsible AI principles that guide their approach. These principles provide a foundation for decision-making when specific policies do not address a particular situation. Common responsible AI principles include:
Fairness
AI tools should be used in ways that are fair and do not discriminate against individuals or groups based on protected characteristics. This is particularly important in applications such as hiring, lending and customer service, where AI-driven bias could cause real harm.
Transparency
Organisations should be transparent about their use of AI, both internally and externally. Employees should know which AI tools are in use and how they work. Customers and stakeholders should be informed when AI plays a significant role in decisions that affect them.
Accountability
There should always be a human accountable for decisions and outputs that involve AI. AI tools can inform and assist, but they should not be the sole decision-maker for consequential matters.
Privacy
AI use should respect the privacy of individuals, in compliance with UU PDP and in accordance with the organisation's broader privacy commitments.
Safety and Security
AI tools should be deployed in a manner that protects the security of the organisation's data and systems. This includes using tools from reputable vendors, applying access controls and monitoring for security incidents.
Implementing AI Governance in Practice
For many Indonesian companies, implementing AI governance may feel daunting. The key is to start simply and build over time. Here is a practical implementation roadmap:
Phase 1: Foundation (Months 1-2)
- Draft an AI acceptable use policy based on the organisation's existing technology use policies.
- Identify an AI governance lead and communicate their role to the organisation.
- Conduct a basic inventory of AI tools currently in use across the organisation.
- Brief senior leadership on the governance framework and obtain their endorsement.
Phase 2: Education (Months 2-4)
- Deliver AI training that includes governance and responsible use content.
- Share the acceptable use policy with all employees and provide guidance on compliance.
- Establish a channel for employees to ask questions and report concerns about AI use.
Phase 3: Maturation (Months 4-6)
- Conduct risk assessments for higher-risk AI use cases.
- Implement vendor due diligence processes for new AI tool procurement.
- Begin monitoring AI usage patterns and policy compliance.
- Gather feedback and refine policies based on practical experience.
Phase 4: Ongoing (Month 6+)
- Conduct regular policy reviews and updates.
- Stay current with changes in Indonesian regulation, including any new guidance related to AI.
- Share learnings and best practices within the industry.
- Expand governance to cover new AI tools and use cases as they are adopted.
The Indonesian Regulatory Outlook
Indonesia's regulatory framework for AI is evolving. While the Stranas KA and UU PDP provide the current foundation, additional regulation specific to AI is likely in the coming years. Companies that build robust governance frameworks now will find it easier to adapt to new requirements as they emerge.
Areas to watch include potential sector-specific AI regulations (for example, in financial services or healthcare), guidance from the Financial Services Authority (OJK) and other regulators on AI use in regulated industries, and developments in regional frameworks such as ASEAN's approach to AI governance.
Governance as a Competitive Advantage
It may seem counterintuitive, but strong AI governance can actually accelerate AI adoption rather than slow it down. When employees have clear guidelines for AI use, they are more confident in experimenting with new tools. When clients and stakeholders know that an organisation has robust AI governance, they are more willing to trust AI-driven services and recommendations.
For Indonesian companies, AI governance is not a burden — it is a foundation for confident, responsible and sustainable AI adoption. The investment in developing a governance framework today will pay dividends in trust, compliance and competitive positioning for years to come.
Frequently Asked Questions
Does Indonesia have AI-specific legislation?
Indonesia does not yet have standalone AI-specific legislation, but the National AI Strategy (Stranas KA) outlines guiding principles, and the Personal Data Protection Law (UU PDP) applies directly to how AI tools process personal data. Companies should build governance frameworks that comply with UU PDP and align with the principles of the Stranas KA.
What should an AI acceptable use policy cover?
An effective policy should cover approved AI tools, data handling rules specifying what information may be input into AI tools, quality assurance requirements for human review of AI outputs, disclosure guidelines and any explicitly prohibited uses. The policy should be practical, clearly written and accessible to all employees.
How does UU PDP affect corporate AI use?
UU PDP requires companies to handle personal data responsibly, including when that data is processed by AI tools. Key implications include restrictions on inputting personal data into AI platforms, requirements for data processing agreements with AI vendors and obligations around data breach notification and data subject rights.
How long does it take to establish an AI governance framework?
A practical AI governance framework can be established in phases over three to six months. The first phase involves drafting policies and assigning responsibilities, the second phase focuses on education and communication, and subsequent phases involve risk assessment, monitoring and continuous improvement.
