What is an AI Sandbox?
An AI Sandbox is a controlled regulatory environment where organisations can test and experiment with AI systems under the supervision of a regulatory body. It allows innovation to proceed while managing risks and informing the development of appropriate regulations.
What is an AI Sandbox?
An AI Sandbox is a structured environment, typically established by a government regulator, where organisations can develop, test, and deploy AI systems under relaxed regulatory requirements and active supervisory oversight. The concept borrows from financial regulatory sandboxes, which have been successfully used in several countries to support fintech innovation while maintaining consumer protections.
In an AI sandbox, participating organisations can experiment with AI technologies that might not fit neatly within existing regulations, test novel approaches in a controlled setting, and work directly with regulators to develop appropriate standards. In return, they agree to enhanced monitoring, reporting requirements, and defined boundaries for their experiments.
The sandbox model serves a dual purpose. It enables organisations to innovate without the fear of inadvertently violating regulations, and it provides regulators with practical insight into how AI technologies work, what risks they pose, and what regulatory approaches are most effective.
Why AI Sandboxes Matter
Addressing Regulatory Uncertainty
One of the biggest barriers to AI adoption is regulatory uncertainty. Organisations may hesitate to deploy AI systems because they are unsure whether those systems comply with existing regulations or will comply with regulations that have not yet been written. Sandboxes reduce this uncertainty by providing a framework for experimentation with regulatory guidance and protection.
Enabling Innovation
Without sandboxes, the default regulatory approach is often either no regulation (which can lead to harmful deployments) or restrictive regulation (which can stifle innovation). Sandboxes offer a middle path that allows innovation to proceed while maintaining oversight and consumer protection.
Informing Better Regulation
Regulators who observe AI systems operating in sandboxes develop a more nuanced understanding of the technology and its risks. This practical experience leads to more effective, proportionate regulations than those developed purely on theoretical grounds. Sandbox participants provide real-world evidence about what works and what does not.
Building Regulatory Relationships
Sandboxes create constructive relationships between innovators and regulators. Rather than viewing regulation as an obstacle, participating organisations work collaboratively with regulators. This relationship often extends beyond the sandbox period, facilitating ongoing dialogue about AI governance.
How AI Sandboxes Work
Application and Selection
Organisations apply to participate in the sandbox, describing the AI system they want to test, its intended use, and the regulatory questions it raises. Regulators select participants based on the innovation potential of the technology, the significance of the regulatory questions involved, and the applicant's capacity to operate responsibly.
Defined Parameters
Each sandbox experiment operates within defined parameters: what the AI system can and cannot do, who it can affect, how long the experiment runs, and what data must be collected and reported. These boundaries protect individuals while enabling meaningful experimentation.
Active Supervision
Unlike standard regulatory oversight, sandbox supervision is proactive and ongoing. Regulators monitor experiments closely, receive regular reports from participants, and can intervene if problems emerge. This hands-on approach enables faster identification and resolution of issues.
Learning and Reporting
Sandbox experiments generate findings that inform both the participating organisation and the regulator. At the conclusion of the sandbox period, participants report on their results, and regulators publish guidance or policy recommendations based on what they observed.
Graduation or Termination
At the end of the sandbox period, successful experiments may graduate to full regulatory approval, potentially with specific conditions. Experiments that reveal unacceptable risks are terminated, and the findings inform future regulatory requirements.
AI Sandboxes in Southeast Asia
Singapore: A Regional Leader
Singapore has been at the forefront of AI regulatory sandboxes in Southeast Asia. The Monetary Authority of Singapore (MAS) established a fintech regulatory sandbox in 2016 that has been used for AI-powered financial services. In 2019, MAS introduced the Sandbox Express pathway for lower-risk innovations, making it easier for organisations to test AI systems in financial contexts.
Beyond financial services, Singapore's Infocomm Media Development Authority (IMDA) has supported AI experimentation through programmes that provide guidance and support for organisations deploying AI in various sectors. Singapore's approach emphasises practical collaboration between regulators and industry.
Thailand
Thailand's Securities and Exchange Commission has operated a regulatory sandbox that includes AI-powered investment and advisory services. The Bank of Thailand has also used sandbox approaches to enable AI experimentation in banking.
Malaysia
Bank Negara Malaysia has a financial technology regulatory sandbox that covers AI applications in banking and insurance. The sandbox has been used to test AI-driven credit scoring, fraud detection, and customer service applications.
Indonesia
The Indonesian Financial Services Authority (OJK) has implemented a regulatory sandbox for fintech, including AI applications. Indonesia's approach focuses on financial inclusion, using the sandbox to test AI systems that could expand financial services to underserved populations.
Regional Harmonisation
ASEAN is working toward greater harmonisation of AI governance approaches, and sandbox practices are part of this effort. Cross-border sandbox arrangements could enable organisations to test AI systems that serve multiple ASEAN markets under coordinated regulatory oversight.
Benefits and Limitations
Benefits
Sandboxes reduce time-to-market for AI innovations by providing regulatory clarity. They protect consumers through active supervision. They generate practical knowledge that improves regulation. They build trust between innovators and regulators. For startups and smaller organisations, sandboxes can level the playing field by providing direct access to regulators that would otherwise be available only to large enterprises with dedicated compliance teams.
Limitations
Sandboxes can only accommodate a limited number of participants. The controlled environment may not fully replicate real-world conditions. Results from sandbox experiments may not generalise to broader deployment. There is also a risk of regulatory capture, where close relationships between regulators and sandbox participants could bias regulatory outcomes.
Getting Involved in AI Sandboxes
For organisations considering sandbox participation, the first step is to identify the relevant regulatory body for your industry and market. Review the sandbox application requirements, which typically include a description of your AI system, its intended use, the regulatory questions it raises, and your plan for protecting affected individuals.
Prepare for enhanced reporting and monitoring requirements. Sandbox participation requires transparency with the regulator about your AI system's performance, including problems and failures. The organisations that benefit most from sandboxes are those that approach the process as a genuine learning partnership with the regulator.
AI Sandboxes offer a strategic advantage for organisations navigating the uncertain regulatory landscape surrounding AI. They provide a legal framework for testing innovative AI applications that might otherwise be delayed or abandoned due to regulatory uncertainty. Participation in a sandbox can accelerate time-to-market, reduce compliance risk, and build credibility with regulators.
For CEOs, sandbox participation signals to investors, customers, and partners that your organisation takes regulatory compliance seriously while actively pursuing innovation. It also provides direct access to regulators, which can inform your broader compliance strategy. For CTOs, sandboxes offer a structured way to test AI systems in real-world conditions with regulatory support, reducing the risk of technical decisions that later prove non-compliant.
In Southeast Asia, where AI regulations are developing rapidly across ASEAN markets, sandbox participation provides early insight into regulatory direction. Singapore, Thailand, Malaysia, and Indonesia all operate sandbox programmes, and organisations with sandbox experience will be better positioned as these markets finalise their AI governance frameworks.
- Identify the relevant regulatory sandboxes in your operating markets, particularly those offered by financial regulators in Singapore, Thailand, Malaysia, and Indonesia.
- Evaluate whether your AI use case involves regulatory uncertainty that a sandbox could help resolve.
- Prepare for enhanced transparency and reporting requirements, as sandbox participation demands openness about your AI system's performance including failures.
- Approach sandbox participation as a learning partnership with regulators rather than simply a path to regulatory approval.
- Use sandbox findings to inform your broader AI governance practices, not just the specific system being tested.
- Consider cross-border sandbox opportunities as ASEAN works toward harmonised AI governance approaches.
- Factor in the resource requirements of sandbox participation, including dedicated staff for regulatory reporting and monitoring.
Frequently Asked Questions
How long does an AI sandbox programme typically last?
AI sandbox programmes typically run for six months to two years, depending on the regulatory body and the complexity of the AI system being tested. Some programmes, like Singapore's MAS Sandbox Express, offer shorter timelines for lower-risk innovations. The duration is usually defined at the start of the experiment and can sometimes be extended if additional time is needed to generate meaningful results. Organisations should plan for the full sandbox period plus time for the graduation assessment.
Can any company participate in an AI sandbox?
Sandbox programmes have specific eligibility criteria that vary by regulator. Generally, applicants must demonstrate a genuine innovation that raises regulatory questions not adequately addressed by existing rules, a clear benefit to consumers or the market, the technical and operational capacity to participate responsibly, and adequate resources for the enhanced monitoring and reporting requirements. Both startups and established companies can participate, though some programmes prioritise certain types of organisations or innovations.
What happens at the end of a sandbox experiment?
At the conclusion of a sandbox experiment, the regulator evaluates the results and determines the appropriate next step. Successful experiments may receive full regulatory approval to operate commercially, sometimes with specific conditions or ongoing monitoring requirements. If the experiment reveals significant risks, the regulator may require modifications before granting approval or may decline approval entirely. In all cases, the regulator typically publishes findings or guidance that benefit the broader industry.
Need help with AI sandboxes?
Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how AI sandboxes fit into your AI roadmap.