What is the Right to Explanation?
The Right to Explanation is the principle that individuals have the right to receive a clear, understandable explanation when an automated system or AI makes a decision that significantly affects them. This right recognises that as organisations increasingly rely on algorithms to make consequential decisions, such as approving loans, screening job applicants, setting insurance premiums, or determining eligibility for services, the people affected by those decisions deserve to understand how and why they were made.
The concept gained legal prominence through the European Union's General Data Protection Regulation (GDPR), which includes provisions around automated decision-making and profiling. Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing that significantly affects them, and Articles 13-15 require organisations to provide meaningful information about the logic involved in such decisions.
Why the Right to Explanation Matters
Fundamental Fairness
When a human decision-maker denies your loan application, you can ask why and receive an answer. When an algorithm makes that same decision, you deserve the same opportunity. The right to explanation ensures that the shift from human to automated decision-making does not eliminate individuals' ability to understand and challenge the decisions that affect their lives.
Accountability
Explanation requirements force organisations to understand their own AI systems. If you cannot explain a decision, you cannot be accountable for it. The right to explanation creates an incentive for organisations to build interpretable systems, maintain documentation, and invest in explainability capabilities.
Error Detection
When individuals can understand AI decisions, they can identify errors. A loan applicant who receives an explanation may recognise that the system relied on incorrect data. A job candidate may identify that the algorithm weighed irrelevant factors. These corrections benefit both the individual and the organisation by improving decision quality.
Trust Building
Explanations build trust between organisations and the people they serve. Research consistently shows that people are more willing to accept automated decisions, even unfavourable ones, when they receive a clear explanation of the reasoning. In customer-facing applications, explanation capabilities can be a competitive differentiator.
Types of Explanations
Global Explanations
Global explanations describe how an AI model works in general. They explain the overall logic, the features the model considers most important, and the general patterns it has learned. Global explanations help stakeholders understand the model's decision-making approach but do not address specific individual decisions.
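As a concrete illustration, the sketch below uses scikit-learn's permutation importance to produce a global explanation for a hypothetical credit model. The dataset and feature names are synthetic stand-ins, not a recommended implementation.

```python
# A minimal sketch of a global explanation: which features matter most
# to a credit-decision model overall. Dataset and feature names are
# hypothetical; any tabular classifier could be substituted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_to_income", "credit_history_years", "late_payments"]
X = rng.normal(size=(1000, 4))
y = (X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance measures how much shuffling each feature degrades
# model performance -- a model-agnostic way to describe overall behaviour.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```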
Local Explanations
Local explanations address specific decisions. They explain why a particular individual received a particular outcome, identifying the factors that were most influential in that specific case. Local explanations are what individuals typically want when they ask why an AI system made a decision about them.
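The sketch below illustrates a local explanation for a single hypothetical applicant, using a logistic regression whose per-feature contributions can be read directly from its coefficients. Real systems typically use more sophisticated techniques, but the shape of the output, a ranked list of factors for one decision, is the same.

```python
# A minimal sketch of a local explanation for one applicant, using a
# logistic regression whose per-feature contributions are directly readable.
# Feature names and the decline/approve mapping are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["income", "debt_to_income", "late_payments"]
X = rng.normal(size=(500, 3))
y = (X[:, 1] + X[:, 2] > 0).astype(int)   # class 1 = declined in this toy setup

model = LogisticRegression().fit(X, y)

applicant = X[0]
# For a linear model, each feature's contribution to the decision score is
# its coefficient times its (standardised) value for this applicant.
contributions = model.coef_[0] * applicant
for name, value, contribution in zip(feature_names, applicant, contributions):
    print(f"{name}={value:.2f} contributed {contribution:+.2f} to the score")
print("decision:", "decline" if model.predict([applicant])[0] == 1 else "approve")
```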
Contrastive Explanations
Contrastive explanations answer the question of what would have needed to be different for the outcome to change. For example, a contrastive explanation might state that a loan application was denied but would have been approved if the applicant's debt-to-income ratio had been below a certain threshold. These explanations are often the most actionable because they tell individuals what they can do to achieve a different outcome.
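The following sketch shows one simple way to generate a contrastive explanation: sweeping a single actionable feature of a hypothetical scoring model until the decision flips. The model, coefficients, and threshold are illustrative assumptions, not a production counterfactual method.

```python
# A minimal sketch of a contrastive (counterfactual) explanation: find the
# debt-to-income ratio at which a declined application would have been
# approved, holding everything else fixed. The scoring model is a stand-in.
import numpy as np

def decline_probability(income, debt_to_income, late_payments):
    # Stand-in scoring model; a real system would call the deployed model here.
    score = -0.00002 * income + 4.0 * debt_to_income + 0.35 * late_payments
    return 1.0 / (1.0 + np.exp(-score))

applicant = {"income": 60_000, "debt_to_income": 0.55, "late_payments": 2}

if decline_probability(**applicant) >= 0.5:
    # Sweep the one feature the applicant can act on to find the flip point.
    for ratio in np.arange(applicant["debt_to_income"], 0.0, -0.01):
        trial = {**applicant, "debt_to_income": round(ratio, 2)}
        if decline_probability(**trial) < 0.5:
            print(f"Approved if debt-to-income were {trial['debt_to_income']:.2f} "
                  f"or lower (currently {applicant['debt_to_income']:.2f}).")
            break
```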
Legal Landscape
European Union
The GDPR provides the strongest legal framework for the right to explanation. While legal scholars debate the exact scope of the right under GDPR, the regulation clearly requires meaningful information about the logic of automated decisions, the significance of those decisions, and their envisaged consequences.
Southeast Asia
Southeast Asian countries are developing their own frameworks. Singapore's PDPA does not include an explicit right to explanation for AI decisions, but the Model AI Governance Framework emphasises transparency and explainability as governance principles. Thailand's PDPA includes provisions around profiling and automated decisions that parallel some GDPR concepts. Indonesia's PDP Law addresses individual rights regarding automated decision-making.
The ASEAN Guide on AI Governance and Ethics recommends that organisations provide explanations of AI-driven decisions, particularly in high-stakes contexts. While these are currently recommendations rather than legal requirements, the trend toward enforceable explanation rights is clear.
Global Trend
Beyond the EU and ASEAN, similar requirements are emerging worldwide. Brazil's LGPD, India's Digital Personal Data Protection Act, and various sector-specific regulations in financial services and healthcare all include elements of explanation rights. For multinational companies, the right to explanation is becoming a global operating requirement.
Implementing the Right to Explanation
Design for Explainability
Build explanation capabilities into AI systems from the start rather than trying to add them after deployment. This means selecting models that support explainability, designing data pipelines that preserve the information needed for explanations, and building user interfaces that can present explanations clearly.
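As a rough sketch of what designing in explainability can look like, the example below has the decision function return the explanation payload and the inputs it relied on alongside the outcome, rather than leaving explanations to be reconstructed later. The rules, thresholds, and field names are hypothetical.

```python
# A minimal sketch of explainability designed in from the start: the decision
# travels with its explanation and the inputs it was based on.
from typing import NamedTuple

class ExplainedDecision(NamedTuple):
    outcome: str
    inputs_used: dict
    key_factors: list[str]

def decide_loan(application: dict) -> ExplainedDecision:
    factors = []
    if application["debt_to_income"] > 0.45:
        factors.append("debt_to_income above 0.45")
    if application["late_payments"] >= 2:
        factors.append("two or more recent late payments")
    outcome = "declined" if factors else "approved"
    # The explanation is produced at decision time, not reconstructed afterwards.
    return ExplainedDecision(outcome, dict(application), factors)

print(decide_loan({"income": 60_000, "debt_to_income": 0.55, "late_payments": 2}))
```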
Choose Appropriate Explanation Methods
Different situations call for different types of explanations. A customer who receives an unfavourable decision needs a clear, non-technical explanation of the key factors. A regulator conducting an audit may need detailed technical information about the model's logic. Design explanation systems that can serve multiple audiences.
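One possible pattern, sketched below, is to keep a single structured record of factor contributions and render it differently for each audience: plain language for the customer, full technical detail for a regulator or auditor. The field names and values are illustrative assumptions.

```python
# A minimal sketch of serving the same underlying explanation to two audiences.
from dataclasses import dataclass, asdict

@dataclass
class FactorContribution:
    feature: str
    value: float
    contribution: float       # signed effect on the decision score
    plain_language: str

factors = [
    FactorContribution("debt_to_income", 0.55, +1.9,
                       "Your monthly debt is high relative to your income."),
    FactorContribution("late_payments", 2, +0.6,
                       "Two late payments were recorded in the past year."),
    FactorContribution("credit_history_years", 8, -0.4,
                       "Your long credit history counted in your favour."),
]

def customer_explanation(factors, top_k=2):
    # Customers see the most influential factors in plain language only.
    ranked = sorted(factors, key=lambda f: abs(f.contribution), reverse=True)
    return [f.plain_language for f in ranked[:top_k]]

def regulator_explanation(factors, model_version="credit-model-v3"):
    # Auditors get the full factor list, raw values, and model metadata.
    return {"model_version": model_version,
            "factors": [asdict(f) for f in factors]}

print(customer_explanation(factors))
print(regulator_explanation(factors))
```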
Balance Transparency and Complexity
Not every technical detail needs to be shared. The goal is meaningful information that enables understanding, not a comprehensive technical briefing. Focus explanations on the factors that were most influential in the specific decision and present them in language the recipient can understand.
Maintain Explanation Records
Document the explanations provided for significant AI decisions. This creates an audit trail that demonstrates compliance with explanation requirements and enables your organisation to respond to challenges or regulatory inquiries.
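A minimal approach, sketched below, is to append each explanation to a structured log at the moment the decision is communicated. The storage format, field names, and identifiers here are illustrative assumptions rather than a prescribed schema.

```python
# A minimal sketch of an explanation audit trail: each significant automated
# decision is logged with the explanation that was given, so the organisation
# can respond to challenges or regulatory inquiries later.
import json
from datetime import datetime, timezone

def record_explanation(decision_id, subject_id, outcome, factors,
                       model_version, path="explanation_log.jsonl"):
    entry = {
        "decision_id": decision_id,
        "subject_id": subject_id,           # pseudonymised identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
        "model_version": model_version,
        "explanation_factors": factors,     # what the individual was told
    }
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

record_explanation(
    decision_id="LN-2024-0042",
    subject_id="anon-7f3a",
    outcome="declined",
    factors=["debt_to_income above threshold", "two late payments in past year"],
    model_version="credit-model-v3",
)
```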
Train Customer-Facing Teams
The people who interact with customers need to understand AI systems well enough to provide or supplement automated explanations. Invest in training for customer service, sales, and support teams so they can address questions about AI-driven decisions competently.
Challenges and Considerations
Technical Complexity
Some AI models, particularly deep neural networks, are inherently difficult to explain. The field of explainable AI (XAI) has developed techniques such as SHAP values, LIME, and attention mechanisms to provide explanations, but these are approximations and have their own limitations.
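The sketch below shows what a SHAP-based local explanation can look like in practice, assuming the shap and scikit-learn packages are installed. The dataset is synthetic, and the exact shape of the returned SHAP values varies across shap versions and model types, so treat this as illustrative rather than production code.

```python
# A minimal sketch of a post-hoc local explanation with the shap package.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_to_income", "credit_history_years", "late_payments"]
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.5 * X[:, 3] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer decomposes one applicant's prediction into additive
# per-feature contributions (SHAP values).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])            # explain the first applicant
contributions = np.asarray(shap_values).reshape(-1)[:4]

for name, contribution in sorted(zip(feature_names, contributions),
                                 key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name}: {contribution:+.3f}")
```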
Trade-offs Between Accuracy and Explainability
In some cases, more explainable models are less accurate than complex models. Organisations must decide whether the accuracy gain from a black-box model justifies the reduction in explainability, particularly for high-stakes decisions.
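A simple way to make this trade-off concrete is to measure both kinds of model on the same data before deciding, as sketched below with an interpretable logistic regression and a gradient-boosted model on a synthetic dataset.

```python
# A minimal sketch of quantifying the accuracy/explainability trade-off:
# in practice, the gap (or lack of one) should be measured on your own data
# before accepting a harder-to-explain model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
# A mildly non-linear target so the two models can differ.
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)

for name, model in [
    ("logistic regression (interpretable)", LogisticRegression()),
    ("gradient boosting (harder to explain)", GradientBoostingClassifier(random_state=0)),
]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```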
Gaming and Manipulation
Detailed explanations could enable individuals to manipulate AI systems by adjusting their inputs to trigger favourable outcomes without genuinely meeting the criteria. Organisations must balance transparency with system integrity.
The Right to Explanation is a growing legal and customer expectation that directly affects how your organisation deploys AI. When your AI systems make decisions about customers, employees, or partners, those individuals increasingly have both the legal right and the practical expectation of understanding why. Failing to provide explanations exposes your organisation to regulatory action, legal challenges, and customer attrition.
For CEOs, the right to explanation affects customer trust and regulatory compliance across every market you operate in. The EU's GDPR already requires explanations for automated decisions affecting EU residents, and Southeast Asian regulations are moving in the same direction. For CTOs, explanation capabilities must be designed into AI systems from the start, not retrofitted after deployment.
The business benefit extends beyond compliance. Organisations that explain their AI decisions well experience higher customer satisfaction, fewer complaints, and stronger relationships. In competitive Southeast Asian markets, the ability to explain AI decisions clearly can differentiate your organisation from competitors who treat AI as an opaque black box.
- Design explanation capabilities into AI systems from the outset rather than attempting to add them after deployment.
- Provide different levels of explanation for different audiences: clear and non-technical for customers, detailed and technical for regulators and auditors.
- Focus explanations on the factors most influential in the specific decision, presented in language the recipient can understand.
- Train customer-facing teams to explain AI-driven decisions competently, as automated explanations may not always be sufficient.
- Maintain records of explanations provided for significant AI decisions to support compliance and audit requirements.
- Monitor the evolving legal landscape across Southeast Asian markets, as explanation requirements are expected to become more explicit.
- Balance transparency with system security, providing meaningful explanations without exposing vulnerabilities to manipulation.
Frequently Asked Questions
Does GDPR require AI systems to explain their decisions?
GDPR does not use the exact phrase "right to explanation," but it requires organisations to provide meaningful information about the logic involved in automated decisions that significantly affect individuals, as well as the significance and envisaged consequences of such processing. Articles 13, 14, and 15 require this information to be provided proactively or upon request. Article 22 gives individuals the right not to be subject to solely automated decisions with legal or similarly significant effects. Collectively, these provisions create a de facto right to explanation for consequential automated decisions.
How detailed must an AI explanation be?
Explanations should be meaningful and understandable to the recipient, but they do not need to reveal every technical detail of the model. For customers, a good explanation identifies the key factors that influenced the decision and, where possible, what would need to change for a different outcome. For regulators, more detailed information about the model logic and testing may be required. The standard is that the explanation should enable the individual to understand the decision and, if appropriate, challenge it meaningfully.
What happens if our AI system cannot explain its decisions?
If your AI system makes consequential decisions about people and cannot provide meaningful explanations, you face several risks. In jurisdictions with explanation requirements, such as the EU, you may be in violation of the law. Even in jurisdictions without explicit requirements, inability to explain decisions undermines accountability and makes it difficult to defend those decisions if challenged. Practically, you should either invest in explainability tools for your existing models, switch to more interpretable models for high-stakes decisions, or implement human review processes that can provide explanations when the automated system cannot.
Need help implementing the Right to Explanation?
Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how the right to explanation fits into your AI roadmap.