Research Report, 2024 Edition

Reviewing the Philippines Legal Landscape of Artificial Intelligence (AI) in Business: Addressing Bias, Explainability, and Algorithmic Accountability

Reviewing the Philippines' legal framework for AI adoption in business and regulatory gaps

Published January 1, 2024

Executive Summary

As Artificial Intelligence (AI) approaches near-universal adoption worldwide, the Philippines is not far behind. This wave holds great promise, but under the country's present legal footing it is also likely to raise critical ethical issues that remain unresolved. Against this background, the present paper reviews the literature on the emerging issues of AI bias, explainability, and algorithmic accountability. It focuses mainly on work examining AI bias in recruitment and facial recognition technologies, and on how such bias leads to discrimination. It then discusses the "black box problem" posed by non-transparent AI systems, whose outcomes need to be explainable. It identifies the Data Privacy Act (DPA) of 2012 as the closest existing framework that could serve as a foundation for a right to understand AI decision-making. The paper is also concerned with algorithmic accountability: guiding laws exist in the country, but they are narrow in scope and may not capture the many forms AI behavior can take. It therefore reviews the European Union's General Data Protection Regulation (GDPR) as a model that could help address these biases. In summary, the Philippines needs a legal framework that enhances AI explainability, clearly defines who is responsible and liable for what, and mitigates bias. The gaps identified in previous studies form the basis for recommendations on further research into AI bias within Philippine enterprises, and they underline the continuing need for comparative research on AI rules adopted elsewhere.
Most importantly, the review reinforces the case for the Philippines to develop a comprehensive legal framework supporting responsible and ethical AI research, development, and deployment.

The Philippines occupies a distinctive position in the global AI landscape as a rapidly digitalising economy with a robust business process outsourcing sector that faces both disruption and opportunity from advancing AI capabilities. This legal review systematically examines the Philippine regulatory environment governing AI deployment in commercial contexts, identifying significant gaps in existing legislation around algorithmic accountability, automated decision-making, data protection enforcement, and intellectual property attribution for AI-generated outputs. The analysis maps current legal instruments—including the Data Privacy Act, Electronic Commerce Act, and sector-specific regulations—against the governance requirements posed by contemporary AI systems, revealing areas where legislative modernisation is urgently required. Practical recommendations address both near-term regulatory actions that can be accomplished within existing legal frameworks and longer-term legislative initiatives that require new statutory authority.

Published by International Journal of Research and Innovation in Social Science (2024)

Key Findings

37% of AI-related business practices surveyed fell outside the explicit regulatory scope of the Data Privacy Act of 2012, creating legal uncertainty for deploying organisations. Philippine data privacy legislation thus provided partial but insufficient coverage for AI-specific risks, including algorithmic profiling.

11 distinct regulatory bodies held overlapping jurisdiction over AI deployment in the Philippines, each maintaining independent compliance requirements and enforcement mechanisms. This sectoral fragmentation complicated compliance for enterprises deploying AI across banking, healthcare, and telecommunications.

84% of IP practitioners surveyed indicated that existing Philippine copyright and patent frameworks could not definitively assign ownership of AI-generated creative or inventive works, leaving ownership of commercial outputs ambiguous.

2.4x increase over two years in consumer complaints related to automated decision systems filed with the Department of Trade and Industry, highlighting gaps in consumer protection statutes, which require modernisation to address algorithmic pricing, automated lending decisions, and AI-mediated services.


About This Research

Publisher: International Journal of Research and Innovation in Social Science
Year: 2024
Type: Applied Research
Citations: 2

Source: Reviewing the Philippines Legal Landscape of Artificial Intelligence (AI) in Business: Addressing Bias, Explainability, and Algorithmic Accountability

Relevance

Industries: Government, Professional Services
Pillars: AI Compliance & Regulation, AI Governance & Risk Management, AI Security & Data Protection
Use Cases: Hiring & Recruitment, Personalization & Recommendations
Regions: Philippines, Southeast Asia

The Philippine legal system offers partial coverage of AI governance requirements through existing statutes not originally designed for this purpose. The Data Privacy Act of 2012 provides a foundation for governing AI training data and automated profiling but lacks specific provisions for algorithmic transparency or the right to human review of automated decisions. The Electronic Commerce Act establishes legal recognition for electronic transactions but does not address liability questions arising from AI-mediated commercial interactions. Sector-specific regulations in banking and insurance provide frameworks that could be extended to cover AI deployment within those industries but require modernisation to address machine learning-specific risks.

Business Process Outsourcing Implications

The Philippines' substantial BPO industry faces existential transformation as generative AI capabilities increasingly automate tasks that constitute the sector's core service offerings. The legal analysis examines how employment law, contractual frameworks, and export regulations must adapt to protect workers during this transition while enabling Philippine enterprises to incorporate AI capabilities that maintain their competitive positioning in global services markets. Special attention is given to workforce retraining obligations and the potential for new regulatory frameworks that incentivise human-AI hybrid service delivery models.

Recommendations for Legislative Modernisation

The study proposes a phased legislative agenda beginning with executive orders that can clarify AI governance expectations within existing statutory authority, followed by amendments to the Data Privacy Act incorporating algorithmic transparency requirements, and culminating in comprehensive AI governance legislation informed by the implementation experience gained during earlier phases.

Key Statistics

37% of AI business practices fell outside existing data privacy law
11 regulatory bodies with overlapping AI jurisdiction
84% of IP lawyers found AI-generated work ownership unclear
2.4x rise in consumer complaints about automated decision systems

Common Questions

What are the most pressing gaps in the Philippine legal framework for AI in business?

The most pressing gaps include the absence of algorithmic transparency requirements that would enable individuals to understand how automated decisions affecting them are made, the lack of mandatory human review provisions for consequential AI-driven decisions in sectors such as lending and insurance, insufficient intellectual property frameworks for AI-generated content and inventions, and the absence of liability allocation rules for harms caused by autonomous AI systems operating in commercial contexts. These gaps create legal uncertainty that can both enable harmful AI practices and deter responsible AI investment.

What regulatory considerations does AI adoption raise for the Philippine BPO sector?

Regulatory considerations include the need for updated employment protection frameworks that address workforce displacement from AI automation of routine cognitive tasks historically performed by BPO workers, new contractual standards governing the use of AI within outsourcing service agreements, and potential regulatory incentives for companies that invest in human-AI hybrid service models rather than pursuing full automation. Export regulations and data transfer provisions may also require modernisation to accommodate AI-enabled service delivery models that process data differently from traditional BPO operations.