
As artificial intelligence tools become more widely available and more capable, Indonesian companies face an important question: how do we use AI responsibly? This is not merely an academic concern. Companies that deploy AI tools without clear policies risk data breaches, regulatory violations, reputational damage and erosion of stakeholder trust. Conversely, companies that establish thoughtful AI governance frameworks can use AI confidently, knowing that appropriate safeguards are in place.
AI governance refers to the policies, processes and structures that guide how an organisation adopts and uses artificial intelligence. It encompasses everything from acceptable use policies for individual employees to enterprise-level risk assessment frameworks. For Indonesian companies, AI governance also involves understanding and complying with the country's evolving regulatory landscape, including the National AI Strategy and the Personal Data Protection Law.
This guide provides a practical framework for Indonesian companies looking to establish AI governance. It is written for business leaders, compliance professionals, IT managers and anyone responsible for guiding their organisation's approach to AI adoption.
Indonesia's Strategi Nasional Kecerdasan Artifisial (Stranas KA), or National Artificial Intelligence Strategy, provides the government's vision for AI development and adoption across the country. Understanding this strategy is valuable context for any company developing its own AI governance framework.
The Stranas KA outlines several key priorities, including developing AI talent, supporting AI research and innovation, promoting ethical AI use and leveraging AI for public benefit. The strategy recognises that AI adoption must be balanced with considerations around privacy, fairness, transparency and accountability.
For companies, the Stranas KA signals the government's supportive stance towards AI adoption while also indicating the direction of future regulation. Companies that align their AI governance frameworks with the principles outlined in the national strategy will be well positioned as the regulatory environment evolves.
Key takeaways from the Stranas KA for corporate governance include:

- The government takes a supportive stance towards AI adoption, so governance should enable responsible use rather than block it.
- Ethical AI use, balanced against privacy, fairness, transparency and accountability, is a stated national priority.
- The strategy signals the direction of future regulation, so aligning internal frameworks with its principles now will ease adaptation later.
The Undang-Undang Pelindungan Data Pribadi (UU PDP), enacted in 2022, is Indonesia's comprehensive personal data protection law. It establishes rules for the collection, processing, storage and sharing of personal data, with significant penalties for non-compliance. For any company using AI tools, UU PDP compliance is a fundamental governance requirement.
AI tools process data — sometimes including personal data — to generate their outputs. When employees use AI tools to analyse customer information, draft personalised communications, or process documents containing personal details, they may be creating data protection obligations for their organisation.
Consent and purpose limitation. Personal data should be collected and processed with appropriate consent and for specified purposes. If AI tools are used to process personal data, the organisation must ensure that this use falls within the purposes for which consent was obtained.
Data minimisation. Organisations should only process personal data that is necessary for the specified purpose. This principle has direct implications for AI use — employees should not feed more personal data into AI tools than is strictly necessary.
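The data minimisation principle can be backed by a lightweight technical control: screening text for likely personal data before it is sent to an external AI tool. The sketch below is illustrative only; the regular expressions (for emails, Indonesian mobile numbers and 16-digit national ID numbers) and the surrounding workflow are assumptions, not a complete or authoritative filter for personal data.

```python
import re

# Illustrative patterns for personal data that commonly appears in Indonesian
# business documents. These are examples, not an exhaustive or official list.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"(?:\+62|0)8\d{7,11}"),  # common Indonesian mobile formats
    "nik": re.compile(r"\b\d{16}\b"),             # 16-digit national ID (NIK) number
}

def redact(text: str) -> str:
    """Replace likely personal data with placeholder tokens before the text
    leaves the organisation's systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarise this complaint from budi@example.co.id, phone 081234567890."
safe_prompt = redact(prompt)
print(safe_prompt)
```

In practice, a redaction step like this would sit in whatever gateway or wrapper employees use to reach approved AI tools, tuned to the kinds of data the organisation actually handles.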
Data subject rights. Individuals have rights regarding their personal data, including the right to access, correct, delete and restrict processing. Companies must be able to fulfil these rights even when AI tools have been involved in data processing.
Data transfer restrictions. UU PDP includes provisions on cross-border data transfers. When AI tools process data on servers located outside Indonesia, this may constitute a cross-border transfer that requires appropriate safeguards.
Breach notification. Organisations must notify the relevant authority and affected individuals in the event of a personal data breach. If an AI tool is involved in a data breach — for example, if sensitive data is inadvertently exposed through an AI platform — the notification obligations still apply.
For Indonesian companies, UU PDP compliance in the context of AI use involves several practical steps:

- Restrict what personal data employees may input into AI platforms, in line with data minimisation.
- Put data processing agreements in place with AI vendors that handle personal data.
- Confirm that cross-border transfer safeguards cover AI tools processing data outside Indonesia.
- Extend breach notification and data subject rights procedures to cover AI-assisted processing.
An effective AI governance framework does not need to be bureaucratic or complex. For most Indonesian companies, a practical governance framework includes the following components:
An acceptable use policy is the cornerstone of AI governance. It sets clear expectations for how employees may use AI tools in the workplace. A well-drafted policy should cover:

- The AI tools that are approved for workplace use.
- Data handling rules specifying what information may be input into AI tools.
- Quality assurance requirements for human review of AI outputs.
- Guidelines on disclosing when AI has been used.
- Any explicitly prohibited uses.
Clear accountability is essential for effective AI governance. Companies should designate responsibility for AI governance, which may involve naming a senior executive owner, forming a cross-functional oversight group spanning legal, compliance and IT, or assigning day-to-day oversight to an existing compliance function.
Before adopting a new AI tool or applying AI to a new use case, companies should conduct a risk assessment. This need not be a lengthy formal process for every minor tool, but should be proportionate to the potential impact. Key questions include:

- Will the tool process personal data, and if so, on what basis?
- Could its outputs inform consequential decisions about individuals?
- Where is data processed and stored, and does this involve a cross-border transfer?
- Does the use case fall within a regulated activity?
For higher-risk applications — those involving sensitive personal data, consequential decisions about individuals, or regulated activities — a more detailed assessment is warranted.
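One way to keep assessments proportionate, as described above, is a simple triage that routes higher-risk use cases to a detailed review. The sketch below is a hypothetical example; the screening questions, categories and routing rules are illustrative assumptions, not a regulatory standard.

```python
# Illustrative AI use-case risk triage. The questions and routing rules below
# are hypothetical examples, not a regulatory requirement.

SCREENING_QUESTIONS = {
    "processes_personal_data": "Will the tool process personal data?",
    "consequential_decisions": "Could outputs inform consequential decisions about individuals?",
    "regulated_activity": "Does the use case fall within a regulated activity?",
    "cross_border_transfer": "Is data processed on servers outside Indonesia?",
}

# Answers that route a use case straight to a detailed assessment.
HIGH_RISK_TRIGGERS = {"consequential_decisions", "regulated_activity"}

def triage(answers: dict) -> str:
    """Map yes/no screening answers to a proportionate level of review."""
    if any(answers.get(q) for q in HIGH_RISK_TRIGGERS):
        return "detailed assessment"
    if answers.get("processes_personal_data") or answers.get("cross_border_transfer"):
        return "standard assessment"
    return "light review"

print(triage({"processes_personal_data": True}))  # standard assessment
```

A spreadsheet or form can serve the same purpose; the point is that the routing logic is written down and applied consistently, rather than decided ad hoc for each tool.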
When selecting AI tools and platforms, companies should evaluate vendors with AI-specific criteria:

- Where the vendor processes and stores data, and whether this triggers UU PDP cross-border transfer provisions.
- Whether the vendor will enter into a data processing agreement.
- The vendor's security practices, certifications and track record.
- How the vendor handles data submitted to the platform, including retention.
AI governance is not a set-and-forget exercise. Companies should regularly review their AI policies and practices to ensure they remain current and effective. This includes updating the acceptable use policy as new tools and use cases emerge, revisiting risk assessments when circumstances change, and tracking regulatory developments under the Stranas KA and UU PDP.
Beyond compliance with specific regulations, Indonesian companies benefit from articulating a set of responsible AI principles that guide their approach. These principles provide a foundation for decision-making when specific policies do not address a particular situation. Common responsible AI principles include:
Fairness and non-discrimination. AI tools should be used in ways that are fair and do not discriminate against individuals or groups based on protected characteristics. This is particularly important in applications such as hiring, lending and customer service, where AI-driven bias could cause real harm.
Transparency. Organisations should be transparent about their use of AI, both internally and externally. Employees should know which AI tools are in use and how they work. Customers and stakeholders should be informed when AI plays a significant role in decisions that affect them.
Human accountability. There should always be a human accountable for decisions and outputs that involve AI. AI tools can inform and assist, but they should not be the sole decision-maker for consequential matters.
Privacy. AI use should respect the privacy of individuals, in compliance with UU PDP and in accordance with the organisation's broader privacy commitments.
Security. AI tools should be deployed in a manner that protects the security of the organisation's data and systems. This includes using tools from reputable vendors, applying access controls and monitoring for security incidents.
For many Indonesian companies, implementing AI governance may feel daunting. The key is to start simply and build over time. A practical roadmap, achievable in phases over three to six months, looks like this: first, draft an acceptable use policy and assign governance responsibilities; second, educate employees and communicate the policy; then, in subsequent phases, introduce risk assessments for new tools and use cases, monitor usage, and refine the framework through regular review.
Indonesia's regulatory framework for AI is evolving. While the Stranas KA and UU PDP provide the current foundation, additional regulation specific to AI is likely in the coming years. Companies that build robust governance frameworks now will find it easier to adapt to new requirements as they emerge.
Areas to watch include potential sector-specific AI regulations (for example, in financial services or healthcare), guidance from OJK and other regulators on AI use in regulated industries, and developments in regional frameworks such as ASEAN's approach to AI governance.
It may seem counterintuitive, but strong AI governance can accelerate AI adoption rather than slow it down. When employees have clear guidelines for AI use, they are more confident in experimenting with new tools. When clients and stakeholders know that an organisation has robust AI governance, they are more willing to trust AI-driven services and recommendations.
For Indonesian companies, AI governance is not a burden — it is a foundation for confident, responsible and sustainable AI adoption. The investment in developing a governance framework today will pay dividends in trust, compliance and competitive positioning for years to come.
Indonesia does not yet have standalone AI-specific legislation, but the National AI Strategy (Stranas KA) outlines guiding principles, and the Personal Data Protection Law (UU PDP) applies directly to how AI tools process personal data. Companies should build governance frameworks that comply with UU PDP and align with the principles of the Stranas KA.
An effective policy should cover approved AI tools, data handling rules specifying what information may be input into AI tools, quality assurance requirements for human review of AI outputs, disclosure guidelines and any explicitly prohibited uses. The policy should be practical, clearly written and accessible to all employees.
UU PDP requires companies to handle personal data responsibly, including when that data is processed by AI tools. Key implications include restrictions on inputting personal data into AI platforms, requirements for data processing agreements with AI vendors and obligations around data breach notification and data subject rights.
A practical AI governance framework can be established in phases over three to six months. The first phase involves drafting policies and assigning responsibilities, the second phase focuses on education and communication, and subsequent phases involve risk assessment, monitoring and continuous improvement.