AI Governance & Risk Management | Point of View

Explainable AI: Industry Perspective

3 min read · Pertama Partners
Updated February 21, 2026
For: CEO/Founder · CTO/CIO · CFO · CHRO

A comprehensive point of view on explainable AI, covering strategy, implementation, and optimization across Southeast Asian markets.


Key Takeaways

  1. 79% of enterprise AI adopters rank explainability as a top-three deployment requirement, up from 48% in 2021.
  2. Explainable models perform within 2-3% accuracy of black-box counterparts, challenging the accuracy-explainability trade-off assumption.
  3. 67% of financial institutions have deployed or are developing XAI capabilities for credit, AML, and fraud detection.
  4. 87% of radiologists report higher trust in AI recommendations accompanied by visual explanations.
  5. Global spending on AI governance and explainability tools is projected to reach $2.1 billion by 2026, growing at 34% annually.

As artificial intelligence systems assume greater responsibility for consequential decisions such as approving loans, diagnosing diseases, and flagging fraud, the demand for explainability has moved from academic interest to regulatory mandate. Explainable AI (XAI) refers to techniques and processes that make AI outputs understandable to human stakeholders without sacrificing meaningful accuracy. According to a 2024 Deloitte survey, 79% of enterprise AI adopters now rank explainability as a top-three deployment requirement, up from 48% in 2021.

The Regulatory Catalyst

Regulation is the primary driver accelerating XAI adoption across industries. The EU AI Act, which entered into force in August 2024, requires that high-risk AI systems provide outputs that are "interpretable by the deployer" and include sufficient documentation for users to understand system behavior. Article 13 specifically mandates transparency in a manner appropriate to the intended purpose of the system.

In the United States, the regulatory approach is sector-specific but equally demanding. The Federal Reserve's SR 11-7 guidance on model risk management, originally designed for statistical models, now applies to machine learning systems in banking. The FDA's 2024 guidance on AI/ML-based Software as a Medical Device requires manufacturers to describe the model's decision-making logic. The SEC has proposed rules requiring broker-dealers to evaluate and disclose predictive analytics and AI used in investor interactions.

Globally, the trend is consistent. Singapore's Model AI Governance Framework, updated in 2024, emphasizes explainability as a core principle. Brazil's AI regulation bill, advancing through Congress, includes transparency requirements for automated decision systems. China's Interim Measures for the Management of Generative AI Services mandate that providers be able to explain algorithmic mechanisms to regulators upon request.

Financial Services: Where Stakes Meet Scale

Financial services leads XAI adoption out of necessity. The industry manages trillions in assets using models that must satisfy regulators, auditors, and customers simultaneously. A 2024 McKinsey analysis found that 67% of financial institutions have deployed or are actively developing XAI capabilities for credit decisions, anti-money laundering, and fraud detection.

Credit scoring illustrates the challenge. Traditional logistic regression models are inherently interpretable: each variable's contribution to the score is transparent. Machine learning models like gradient-boosted trees or neural networks can improve predictive accuracy by 15-25% according to research published in the Journal of Financial Economics, but their decision logic is opaque. Techniques like SHAP (SHapley Additive exPlanations) values have become industry standard for decomposing individual predictions into feature-level contributions.
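
As a concrete illustration, the sketch below decomposes one prediction from a gradient-boosted credit model into per-feature SHAP contributions. The synthetic data, feature names, and model choice are assumptions made purely for illustration, not any institution's actual scorecard.

```python
# Minimal sketch: decomposing a single credit decision into SHAP contributions.
# Synthetic data and feature names are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_age_months", "recent_inquiries"]
X = rng.normal(size=(1000, 4))
# Synthetic "default" label loosely driven by debt ratio and recent inquiries.
y = (0.8 * X[:, 1] + 0.5 * X[:, 3] - 0.6 * X[:, 0]
     + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])   # explain the first applicant

for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:>20}: {contribution:+.3f}")
print(f"{'base value':>20}: {float(np.ravel(explainer.expected_value)[0]):+.3f}")
```

The per-feature contributions sum (with the base value) to the model's raw output for that applicant, which is what makes SHAP useful for adverse-action reasoning.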

Anti-money laundering (AML) presents a different explainability challenge. False positive rates in traditional AML systems exceed 95% according to a 2024 ACAMS survey, meaning investigators spend the vast majority of their time reviewing legitimate transactions. AI models that reduce false positives must explain why specific transactions were flagged and, critically, why others were not, to satisfy examiner expectations and legal requirements under the Bank Secrecy Act.

Healthcare: Explainability as Patient Safety

In healthcare, explainability is inseparable from patient safety and clinical trust. The American Medical Association's 2024 policy on Augmented Intelligence states that AI systems used in clinical settings must provide explanations "sufficient for clinicians to exercise independent medical judgment." Physicians will not, and should not, follow algorithmic recommendations they cannot evaluate.

Diagnostic imaging offers a mature use case. AI models that detect cancerous lesions in radiology images now achieve sensitivity rates above 94% for certain cancer types, according to a 2024 meta-analysis in The Lancet Digital Health. However, adoption depends on visual explainability: heat maps or attention maps that show which regions of an image triggered the model's assessment. A 2024 survey in JAMA Network Open found that 87% of radiologists reported higher trust in AI recommendations accompanied by visual explanations.
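
One common way to produce such heat maps is gradient-based saliency: the gradient of the predicted-class score with respect to the input pixels highlights the regions that most influence the prediction. The sketch below assumes a tiny untrained network and a random image purely for illustration; production systems use trained, validated models and more robust methods such as Grad-CAM.

```python
# Minimal sketch: a gradient-based saliency map, one simple form of the
# heat maps described above. The tiny CNN and random input are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),   # e.g. "lesion" vs. "no lesion"
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)   # stand-in for a scan
logits = model(image)
predicted_class = int(logits.argmax(dim=1))

# Gradient of the predicted-class score w.r.t. input pixels:
# large magnitudes mark the pixels that most influence the prediction.
logits[0, predicted_class].backward()
saliency = image.grad.abs().squeeze()   # 64x64 heat map
print(saliency.shape, float(saliency.max()))
```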

Drug discovery represents an emerging frontier. AI models that predict molecular interactions or identify drug candidates process vast chemical spaces. Explainability in this context means understanding which molecular features drive predictions, enabling chemists to apply domain knowledge and avoid pursuing candidates that are computationally promising but biochemically impractical. Pfizer reported in 2024 that XAI tools reduced their drug candidate validation cycle by 20%.

Manufacturing and Supply Chain: Operational Explainability

Manufacturing environments require explainability for operational rather than regulatory reasons, though regulation is catching up. When an AI system recommends shutting down a production line for preventive maintenance, plant managers need to understand the rationale to make informed trade-offs between maintenance costs and production losses.

Predictive maintenance models in manufacturing achieve cost savings of 10-30% according to a 2024 Deloitte analysis, but their value depends on operator trust. Siemens reported that their explainable predictive maintenance platform, which provides component-level degradation explanations, achieved 40% higher adoption rates among plant operators compared with black-box alternatives.

Supply chain optimization adds complexity. Models that recommend inventory levels, routing decisions, or supplier diversification strategies incorporate hundreds of variables across geography, weather, geopolitics, and demand signals. Gartner's 2024 Supply Chain Technology report found that 54% of supply chain leaders require explainability features in AI procurement specifications, up from 22% in 2021.

Technical Approaches to Explainability

The XAI field has matured beyond post-hoc interpretation into purpose-built explainable architectures. Three approaches dominate industry practice:

Inherently interpretable models sacrifice some accuracy for transparency. Decision trees, linear models, and rule-based systems remain appropriate where regulatory requirements prioritize interpretability over marginal accuracy gains. The Consumer Financial Protection Bureau has indicated a preference for interpretable models in fair lending contexts.
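
For illustration, the sketch below fits a shallow decision tree on synthetic data and prints its complete rule set. The data and feature names are assumptions, but the point stands: the entire decision logic fits on a screen and can be audited path by path.

```python
# Minimal sketch: an inherently interpretable model whose full decision
# logic can be printed and reviewed. Data and feature names are illustrative.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
feature_names = ["income", "debt_ratio", "credit_age", "inquiries"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every path from root to leaf is a human-readable rule.
print(export_text(tree, feature_names=feature_names))
```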

Post-hoc explanation methods apply to any model after training. SHAP values, LIME (Local Interpretable Model-agnostic Explanations), and counterfactual explanations are the most widely deployed. A 2024 survey by O'Reilly found that 71% of ML practitioners use SHAP values as their primary explanation method.
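
The sketch below shows a LIME explanation for a single prediction from a black-box classifier, assuming the open-source lime package and synthetic data. The local linear surrogate LIME fits approximates the model only in the neighborhood of the explained instance, which is the key caveat when interpreting its weights.

```python
# Minimal sketch: a LIME explanation for one prediction from a black-box model.
# The lime package, synthetic data, and feature names are illustrative assumptions.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(5)]
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["negative", "positive"],
                                 mode="classification")

# LIME fits a local linear surrogate around the instance being explained.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")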

Attention and concept-based explanations are emerging for deep learning and large language models. These techniques identify which input features or learned concepts most influenced a specific output. Google's Testing with Concept Activation Vectors (TCAV) and attention visualization in transformer models represent the current state of the art for complex architectures.
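
A minimal sketch of the raw mechanics, extracting per-layer attention weights from a pretrained transformer, is shown below. The bert-base-uncased checkpoint is used only as a convenient example (it downloads on first run), and attention weights are raw material for explanation tools rather than a complete explanation in themselves.

```python
# Minimal sketch: extracting attention weights from a transformer, the raw
# material for attention visualizations. Downloads bert-base-uncased on first run.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The loan application was declined.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shape (batch, heads, tokens, tokens).
last_layer = outputs.attentions[-1][0]     # (heads, tokens, tokens)
avg_attention = last_layer.mean(dim=0)     # average over attention heads
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, row in zip(tokens, avg_attention):
    print(f"{token:>12} attends most to {tokens[int(row.argmax())]}")
```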

The Accuracy-Explainability Trade-Off

A persistent concern is that explainability comes at the cost of accuracy. Recent research challenges this assumption. A 2024 study in Nature Machine Intelligence demonstrated that explainable models performed within 2-3% of black-box counterparts on standard benchmarks across healthcare, finance, and natural language processing tasks. For many business applications, this marginal difference is negligible compared with the governance, trust, and compliance benefits of explainability.
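
A quick way to sanity-check the trade-off on your own data is to benchmark an interpretable baseline against a black-box model under identical conditions. The sketch below uses a standard scikit-learn dataset purely for illustration; its numbers will not reproduce the published benchmarks cited above.

```python
# Minimal sketch: comparing an interpretable model with a black-box model
# on the same data. Dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("gradient boosting", black_box)]:
    accuracy = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name:>20}: {accuracy:.3f}")
```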

The trade-off is context-dependent. In fraud detection, where millisecond latency matters and false positives are tolerable, complex models with post-hoc explanations may be appropriate. In clinical diagnosis, where each decision affects a patient and must withstand scrutiny, inherently interpretable models or highly transparent architectures should be the default.

Building an XAI Strategy

Organizations implementing XAI should begin with a stakeholder mapping exercise. Different audiences need different explanations: regulators need model documentation and validation evidence; operators need actionable rationale for specific recommendations; customers need plain-language justifications for decisions that affect them. A one-size-fits-all explanation strategy fails all audiences.

Investment in XAI is accelerating. IDC projected that global spending on AI governance and explainability tools will reach $2.1 billion by 2026, growing at 34% annually. Organizations that build XAI capabilities now create competitive advantages in regulated markets, customer trust, and operational reliability that will compound as regulatory requirements tighten globally.

Neuroscience-Informed Design and Cognitive Ergonomics

Human-machine interface design increasingly draws on neuroscientific research into attentional bandwidth limits, cognitive fatigue, and decision-quality degradation under information overload. Kahneman's System 1/System 2 dual-process theory explains why dashboard designers should route anomaly-detection alerts through peripheral visual channels (leveraging preattentive processing) while reserving central interface space for deliberative analytical workflows. Fitts's law guides the sizing and placement of interactive elements; Hick's law argues for progressive disclosure to reduce decision paralysis. The Yerkes-Dodson inverted-U arousal curve suggests that moderate notification frequencies maximize operator vigilance, whereas excessive alerting paradoxically diminishes responsiveness through habituation. Ethnographic studies of control-room environments (air traffic management, nuclear facility operations, intensive care monitoring) yield transferable principles for designing mission-critical AI interfaces that require sustained human oversight.

Common Questions

What is explainable AI, and why does it matter?

Explainable AI (XAI) refers to techniques that make AI outputs understandable to human stakeholders without sacrificing meaningful accuracy. It matters because 79% of enterprise AI adopters now rank explainability as a top-three deployment requirement, driven by regulatory mandates like the EU AI Act, customer trust expectations, and operational needs for human-AI collaboration.

Which industries are leading XAI adoption?

Financial services leads due to existing model risk management regulations (SR 11-7) and fair lending requirements. Healthcare follows, where the AMA mandates explanations sufficient for independent clinical judgment. Manufacturing and supply chain are growing rapidly, with 54% of supply chain leaders now requiring XAI features in AI procurement specifications.

Does explainability come at the cost of accuracy?

Recent research challenges this assumption. A 2024 Nature Machine Intelligence study demonstrated that explainable models performed within 2-3% of black-box counterparts across healthcare, finance, and NLP benchmarks. For most business applications, this marginal difference is negligible compared with governance, trust, and compliance benefits.

What are the main technical approaches to explainability?

Three approaches dominate: inherently interpretable models (decision trees, linear models) for regulated contexts; post-hoc methods like SHAP values and LIME, used by 71% of ML practitioners; and attention/concept-based explanations for deep learning. SHAP values are the most widely deployed method for decomposing individual predictions into feature-level contributions.

What does the EU AI Act require for explainability?

The EU AI Act requires high-risk AI systems to provide outputs that are 'interpretable by the deployer' with sufficient documentation for users to understand system behavior. Article 13 specifically mandates transparency appropriate to the system's purpose. This applies to AI used in credit scoring, hiring, healthcare, law enforcement, and critical infrastructure.

References

  1. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST) (2023).
  2. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023).
  3. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
  4. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024).
  5. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT). Monetary Authority of Singapore (2018).
  6. OECD Principles on Artificial Intelligence. OECD (2019).
  7. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).

Talk to Us About AI Governance & Risk Management

We work with organizations across Southeast Asia on AI governance & risk management programs. Let us know what you are working on.