
What is Automated Decision-Making?

Automated Decision-Making is the use of artificial intelligence and algorithmic systems to make decisions that affect individuals or organisations with limited or no human intervention. These decisions can range from routine operational choices to high-stakes determinations about credit, employment, insurance, and access to services.

Automated Decision-Making (ADM) refers to the process by which AI systems or algorithms make decisions that affect people or organisations without meaningful human involvement at the point of decision. The system takes in data, applies rules or learned patterns, and produces a decision or recommendation that is acted upon directly.

For business leaders, ADM is both an enormous opportunity and a significant governance challenge. Automated decisions can dramatically improve speed, consistency, and cost-efficiency. But when those decisions touch people's lives (employment, access to credit, insurance claims, or legal outcomes), the stakes are high and the governance requirements are substantial.

The Spectrum of Automation

Not all automated decision-making is the same. It exists on a spectrum:

  • Fully automated: The system makes the decision and executes it without any human review. Examples include automated credit scoring, algorithmic trading, and content moderation at scale.
  • Semi-automated with human review: The system makes a recommendation, and a human reviews it before the decision is finalised. Examples include AI-assisted hiring shortlisting and medical diagnosis support.
  • Human decision with AI support: A human makes the decision but uses AI-generated insights as one input among many. Examples include strategic planning with predictive analytics.

The governance requirements increase as you move toward full automation, especially when the decisions significantly affect individuals.

Why Automated Decision-Making Needs Governance

Automated decisions at scale can create systematic patterns of harm that are difficult to detect without deliberate oversight:

  • Scale of impact: An automated system can make thousands or millions of decisions per day. If there is a flaw in the logic or data, the resulting harm is multiplied by every decision made.
  • Consistency vs. rigidity: While consistency is a benefit of automation, it can also mean that a flawed decision is applied uniformly, affecting entire populations rather than individual cases.
  • Accountability gaps: When no human is directly involved in a decision, it can be unclear who is responsible for the outcome. This creates both legal and ethical challenges.
  • Reduced context sensitivity: Automated systems may not account for individual circumstances that a human decision-maker would consider. This can lead to decisions that are technically correct by the system's criteria but unjust in context.

Key Governance Requirements

Transparency

Individuals affected by automated decisions should know that an automated system is involved, what data it uses, and in general terms how decisions are made. This is both an ethical requirement and increasingly a legal one.

Right to Human Review

For significant decisions, individuals should have the ability to request human review. This serves as a safety net for cases where the automated system's output may be inappropriate or unjust due to unusual circumstances.

Regular Auditing

Automated decision-making systems should be regularly audited for accuracy, fairness, and compliance with applicable laws and organisational policies. Audits should examine both the system's technical performance and its real-world outcomes.
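As a concrete illustration of outcome auditing, the sketch below compares approval rates across groups and flags large disparities. It assumes a simple (group, approved) record format invented for this example; the four-fifths ratio used as a flag is a rule of thumb borrowed from US employment practice, not a legal threshold in any ASEAN jurisdiction.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """Compute per-group approval rates from (group, approved) records."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.

    Values below roughly 0.8 (the 'four-fifths' rule of thumb)
    suggest a disparity worth investigating, not a legal finding.
    """
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates_by_group(decisions)
print(rates)                         # per-group approval rates
print(disparate_impact_ratio(rates))  # flag if well below ~0.8
```

A real audit would go well beyond a single ratio (sample sizes, intersectional groups, error-rate parity), but even this minimal check can surface problems between formal audits.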

Impact Assessment

Before deploying an automated decision-making system, conduct an impact assessment that evaluates the potential effects on individuals and groups, particularly vulnerable populations. This assessment should inform design choices and governance controls.

Automated Decision-Making in Southeast Asia

Automated decision-making is increasingly common across ASEAN markets in sectors including financial services, insurance, telecommunications, and e-commerce.

Singapore addresses automated decision-making through its Model AI Governance Framework, which recommends human oversight for decisions that significantly affect individuals. The PDPA's provisions on data processing apply to data used in automated decisions, and the PDPC has issued guidance on transparency and accountability for algorithmic decisions.

Thailand's PDPA includes provisions relevant to automated decision-making, including the requirement for lawful bases for data processing and individual rights related to automated decisions affecting legal rights.

Indonesia's Personal Data Protection Act establishes individual rights related to automated processing, including the right to object to decisions based solely on automated processing that produce legal effects or similarly significant impacts.

The Philippines' Data Privacy Act includes provisions on automated processing and profiling, requiring data controllers to implement appropriate safeguards.

For businesses deploying automated decision-making across ASEAN, building governance structures that include transparency, human review options, and regular auditing provides a consistent standard that meets emerging requirements across the region.

Implementation Best Practices

  1. Classify your automated decisions by risk: Not all automated decisions need the same level of governance. Credit decisions affecting individuals need more oversight than product recommendation algorithms.
  2. Build in human review triggers: Define the conditions under which an automated decision should be escalated to a human reviewer, such as edge cases, high-value decisions, or consumer requests.
  3. Maintain decision logs: Keep detailed records of automated decisions, the data used, and the rationale, to support auditing, dispute resolution, and regulatory compliance.
  4. Test for bias regularly: Automated systems can develop biased patterns over time. Regular testing across demographic groups helps catch problems early.
  5. Communicate clearly: Tell customers and stakeholders when automated decision-making is involved and explain how they can request human review if they are dissatisfied with an outcome.
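Practices 2 and 3 above (human review triggers and decision logs) can be sketched together. The following is a minimal Python illustration: the score band, the high-value threshold, and the record fields are assumptions made for the example, not recommended values.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative escalation conditions; real triggers would come from policy.
REVIEW_SCORE_BAND = (0.45, 0.55)   # borderline model scores
HIGH_VALUE_THRESHOLD = 50_000      # e.g. loan amount, assumed for the sketch

@dataclass
class DecisionRecord:
    subject_id: str
    inputs: dict
    model_score: float
    decision: str
    needs_human_review: bool
    timestamp: str

def decide(subject_id, inputs, model_score):
    """Make an automated decision, logging it and escalating edge cases."""
    needs_review = (
        REVIEW_SCORE_BAND[0] <= model_score <= REVIEW_SCORE_BAND[1]
        or inputs.get("amount", 0) >= HIGH_VALUE_THRESHOLD
        or inputs.get("customer_requested_review", False)
    )
    decision = "refer_to_human" if needs_review else (
        "approve" if model_score > REVIEW_SCORE_BAND[1] else "decline"
    )
    record = DecisionRecord(
        subject_id=subject_id,
        inputs=inputs,
        model_score=model_score,
        decision=decision,
        needs_human_review=needs_review,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # An append-only log of each record supports auditing and disputes;
    # printing JSON stands in for writing to durable storage.
    print(json.dumps(asdict(record)))
    return record

decide("c-1001", {"amount": 12_000}, model_score=0.82)
```

The design point is that escalation rules are explicit, versionable data rather than logic buried in the model, so they can be reviewed and changed without retraining.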

Why It Matters for Business

Automated Decision-Making is one of the primary ways organisations derive business value from AI. It enables speed, scale, and consistency that human decision-making alone cannot achieve. But it also concentrates risk in ways that require deliberate governance.

For business leaders in Southeast Asia, the governance of automated decision-making is becoming a regulatory imperative. Data protection authorities across the region are increasingly focused on how automated decisions affect individuals, and the right to human review is emerging as a common requirement. Organisations that deploy automated decision-making without appropriate governance structures face regulatory exposure, customer backlash, and potential legal challenges.

From a competitive perspective, well-governed automated decision-making builds trust with customers and regulators alike. Customers are more willing to engage with automated processes when they know they have recourse. Regulators are more supportive of AI innovation when they see evidence of responsible governance. Investing in governance for automated decision-making is an investment in the sustainability and scalability of your AI operations.

Key Considerations

  • Classify all automated decisions by their potential impact on individuals and apply governance controls proportional to the risk level.
  • Build human review mechanisms into all high-stakes automated decision-making systems and ensure customers know how to request review.
  • Maintain comprehensive decision logs that record the inputs, outputs, and rationale for automated decisions to support auditing and dispute resolution.
  • Test automated decision-making systems regularly for bias, accuracy, and compliance with applicable regulations across your ASEAN operating markets.
  • Communicate clearly with customers about when automated decision-making is involved and what their rights are regarding those decisions.
  • Review your automated decision-making governance at least quarterly as regulations and best practices evolve rapidly across Southeast Asia.

Common Questions

Are we required to tell customers when decisions are made by AI?

Requirements vary across ASEAN markets, but the trend is clearly toward transparency. Singapore's AI governance guidance recommends disclosure when AI is involved in significant decisions. Indonesia's Personal Data Protection Act includes rights related to automated processing. Thailand's PDPA requires lawful bases for automated data processing. Even where disclosure is not explicitly mandated, it is considered best practice and builds customer trust. Organisations that are transparent about automated decision-making are better positioned for evolving regulatory requirements.

Can customers demand a human review of an automated decision?

In several ASEAN jurisdictions, individuals have rights related to automated decisions that significantly affect them. Indonesia's data protection law provides the right to object to decisions based solely on automated processing. Similar rights are emerging across the region. Even where not legally required, offering human review for significant automated decisions is a governance best practice that reduces complaint escalation, builds customer loyalty, and demonstrates responsible AI use to regulators.

Do all automated decisions need human oversight?

The key is risk-based governance. Not every automated decision needs human oversight. Low-risk, high-volume decisions like product recommendations or content categorisation can be fully automated with periodic auditing. High-risk decisions affecting individuals' financial status, employment, or access to services should include human review triggers for edge cases, appeals, and regular quality checks. This tiered approach captures the efficiency benefits of automation while maintaining appropriate oversight where the stakes are highest.
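The tiered approach described above can be expressed as a simple lookup from decision type to governance controls. The categories, tier names, and controls below are illustrative assumptions; a real classification would come from the organisation's own impact assessment.

```python
# Illustrative risk tiers mapping decision types to governance controls.
# Categories and controls are assumptions for the sketch, not a standard.
GOVERNANCE_TIERS = {
    "low":    {"human_review": "none",       "audit": "periodic"},
    "medium": {"human_review": "on_request", "audit": "quarterly"},
    "high":   {"human_review": "edge_cases", "audit": "continuous"},
}

DECISION_RISK = {
    "product_recommendation": "low",
    "content_categorisation": "low",
    "insurance_pricing": "medium",
    "credit_approval": "high",
    "hiring_shortlist": "high",
}

def controls_for(decision_type):
    """Look up governance controls, defaulting to the strictest tier
    for any decision type that has not yet been classified."""
    tier = DECISION_RISK.get(decision_type, "high")
    return tier, GOVERNANCE_TIERS[tier]

print(controls_for("credit_approval"))
```

Defaulting unclassified decision types to the strictest tier is a deliberate fail-safe: a new automated decision gets full oversight until someone explicitly argues it down.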


Related Terms
AI Governance

AI Governance is the set of policies, frameworks, and organisational structures that guide how artificial intelligence is developed, deployed, and monitored within an organisation. It ensures AI systems operate responsibly, comply with regulations, and align with business values and societal expectations.

Data Privacy

Data Privacy is the practice of handling personal data in a way that respects individuals' rights to control how their information is collected, used, stored, shared, and deleted. It encompasses the legal, technical, and organisational measures that organisations implement to protect personal data and comply with data protection regulations.

AI Governance Framework

An AI Governance Framework is a structured set of policies, processes, roles, and accountability mechanisms that an organisation establishes to ensure its artificial intelligence systems are developed, deployed, and managed responsibly, ethically, and in compliance with applicable regulations.

Predictive Analytics

Predictive Analytics is the practice of using historical data, statistical algorithms, and machine learning techniques to forecast future outcomes and trends. It enables organisations to anticipate what is likely to happen next, moving beyond understanding past performance to proactively preparing for future events and opportunities.

AI Bias

AI Bias is the systematic and unfair discrimination in AI system outputs that arises from prejudiced assumptions in training data, algorithm design, or deployment context. It can lead to inequitable treatment of individuals or groups based on characteristics like race, gender, age, or socioeconomic status, creating legal, ethical, and business risks.

Need help implementing Automated Decision-Making?

Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how automated decision-making fits into your AI roadmap.