AI Governance & Risk Management · Tool Review

Zero Trust: Best Practices

3 min read · Pertama Partners
Updated February 21, 2026
For: CEO/Founder, CTO/CIO, Consultant, CFO, CHRO

A comprehensive review of zero trust for AI systems, covering strategy, implementation, and optimization across Southeast Asian markets.

Key Takeaways

  1. Organizations with mature zero trust experience 42% lower breach costs ($3.28M vs $5.65M) and 51% lower costs for AI-specific breaches
  2. Mutual TLS between AI services reduces unauthorized lateral movement by 94% per Google's BeyondProd framework
  3. Micro-segmented AI infrastructure contains breaches 68% faster and reduces blast radius by 83% vs flat networks
  4. Behavioral analytics detect 73% of AI-targeted attacks before achieving objectives compared to 31% for rule-based detection
  5. AI-specific incident response playbooks resolve incidents 58% faster with 41% less data loss than generic cybersecurity playbooks

The convergence of AI systems and zero trust security has become a board-level priority, with 78% of CISOs reporting that AI-specific security frameworks are their top investment area for 2025, according to Gartner's 2024 CISO Effectiveness Survey. Traditional perimeter-based security models fundamentally cannot protect AI workloads that operate across cloud environments, consume data from diverse sources, and interact with external APIs in real time.

Why AI Systems Demand Zero Trust Architecture

AI systems present unique security challenges that expose the limitations of conventional approaches. Model APIs process sensitive data at massive scale: a single large language model endpoint may handle thousands of requests per minute containing proprietary business data, personal information, and strategic insights. Training pipelines ingest data from dozens of sources, any of which could introduce poisoned or compromised inputs. Inference endpoints are attractive targets for adversarial attacks, model extraction, and data exfiltration.

The National Institute of Standards and Technology (NIST) Special Publication 800-207 defines zero trust as "an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources." For AI systems, this means every interaction, whether a user querying a model, a pipeline accessing training data, or a model calling an external API, must be independently verified, authorized, and logged.

IBM's 2024 Cost of a Data Breach Report found that organizations with mature zero trust implementations experienced breach costs 42% lower than those without ($3.28M versus $5.65M average). For AI-specific breaches, which IBM categorizes separately for the first time, zero trust reduced incident costs by 51% due to faster containment and smaller blast radii.

Identity Verification for AI Workloads

Traditional identity management assumes human users, but AI systems introduce machine-to-machine, model-to-data, and agent-to-service interactions that require fundamentally different identity frameworks.

Every AI component (models, data pipelines, preprocessing services, inference endpoints, monitoring agents) must have a unique, cryptographically verifiable identity. Service mesh platforms like Istio and Linkerd implement mutual TLS (mTLS) between services, ensuring that both sides of every communication verify each other's identity before exchanging data. Google's BeyondProd framework, which Google uses internally for all AI workloads, demonstrated that mTLS implementation reduces unauthorized lateral movement by 94%.
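In a service mesh this requirement is configured declaratively; outside one, the both-sides-verify idea can be sketched with Python's standard `ssl` module. This is a minimal sketch, and the certificate, key, and CA paths are hypothetical placeholders:

```python
import ssl
from typing import Optional

def make_mtls_context(server_side: bool,
                      cert: Optional[str] = None,
                      key: Optional[str] = None,
                      ca: Optional[str] = None) -> ssl.SSLContext:
    """Build a TLS context in which BOTH peers must present a valid certificate."""
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    ctx = ssl.create_default_context(purpose, cafile=ca)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject any peer without a trusted certificate
    if cert and key:
        # e.g. cert="inference-svc.crt", key="inference-svc.key" (hypothetical paths)
        ctx.load_cert_chain(cert, key)
    return ctx
```

The key line is `verify_mode = ssl.CERT_REQUIRED`: a plain TLS server normally accepts anonymous clients, whereas mTLS makes the client's certificate as mandatory as the server's.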

For human users accessing AI systems, implement phishing-resistant multi-factor authentication (FIDO2/WebAuthn) with continuous behavioral analytics. Microsoft's 2024 Identity Security Report found that FIDO2-based authentication eliminates 99.9% of identity-based attacks compared to password-only schemes, and 98% compared to traditional MFA using SMS or app-based codes.

Implement just-in-time (JIT) access provisioning for AI administrative operations. Data scientists requiring access to training data, model weights, or production configurations should receive time-limited credentials that automatically expire. CyberArk's 2024 Privileged Access Management study found that JIT provisioning reduces the attack surface for privileged AI operations by 76% compared to standing access.
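JIT issuance can be sketched as a credential that carries its own expiry, so access lapses without any revocation step. The TTL, scope strings, and field names below are illustrative assumptions:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class JitCredential:
    subject: str        # who the credential was issued to
    scope: str          # e.g. "read:training-data" (hypothetical scope string)
    token: str
    expires_at: float   # absolute expiry timestamp

def issue_jit_credential(subject: str, scope: str, ttl_seconds: int = 900) -> JitCredential:
    """Issue a short-lived credential that expires automatically (no standing access)."""
    return JitCredential(
        subject=subject,
        scope=scope,
        token=secrets.token_urlsafe(32),
        expires_at=time.time() + ttl_seconds,
    )

def is_valid(cred: JitCredential) -> bool:
    """Every access check re-verifies expiry; nothing is trusted indefinitely."""
    return time.time() < cred.expires_at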

Micro-Segmentation for AI Infrastructure

Micro-segmentation isolates AI workloads into discrete security zones, preventing lateral movement if any single component is compromised. Unlike traditional network segmentation that divides infrastructure into broad zones, micro-segmentation enforces granular policies at the individual workload level.

For AI systems, implement segmentation along four boundaries. The data boundary isolates training data stores from inference environments, preventing production model queries from accessing raw training data. The model boundary separates different models and their associated resources, preventing a compromised model from accessing other models' weights or configurations. The pipeline boundary isolates training and evaluation pipelines from production serving infrastructure. The API boundary controls which external services can communicate with AI endpoints and what data they can exchange.

Illumio's 2024 Zero Trust Segmentation Impact study found that organizations with micro-segmented AI infrastructure contained breaches 68% faster and reduced the average blast radius by 83% compared to flat network architectures. The study also noted that micro-segmentation reduced ransomware propagation time from minutes to effectively zero in 91% of simulated scenarios.

Policy enforcement should be identity-aware, not just IP-based. Define policies that specify "training pipeline X can read from data store Y during scheduled training windows" rather than "subnet A can access subnet B." This identity-aware approach adapts automatically as workloads scale, migrate, or redeploy.
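Such a policy can be sketched as a lookup keyed on workload identity and schedule rather than on network address. The workload and datastore names, and the training window, are hypothetical:

```python
from datetime import datetime, time

# Hypothetical identity-aware policy table:
# (workload identity, resource) -> permitted access window
POLICIES = {
    ("pipeline/train-fraud-model", "datastore/transactions"): (time(1, 0), time(5, 0)),
}

def is_allowed(workload: str, resource: str, at: datetime) -> bool:
    """Authorize by workload identity and schedule, not by source IP or subnet."""
    window = POLICIES.get((workload, resource))
    if window is None:
        return False  # default deny: unknown identity/resource pairs get nothing
    start, end = window
    return start <= at.time() <= end
```

Because the key is an identity, the policy survives the workload being rescheduled onto a different node or subnet, which is exactly where IP-based rules break.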

Continuous Validation and Adaptive Access Control

Zero trust replaces one-time authentication with continuous validation. For AI systems, this means verifying every request against current context (user identity, device posture, request pattern, data sensitivity, and threat intelligence) before granting access.

Implement risk-adaptive access controls that dynamically adjust permissions based on real-time risk assessment. A data scientist querying a model from a corporate device during business hours from a known location receives standard access. The same user querying from an unmanaged device, at an unusual hour, from a new geography triggers additional verification steps or restricts access to less sensitive operations.
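The contextual logic above can be sketched as a simple additive risk score; the signals, weights, and decision labels are hypothetical simplifications of what a real risk engine would compute:

```python
def assess_access(managed_device: bool, business_hours: bool, known_location: bool) -> str:
    """Map real-time context signals to an access decision (simplified risk model)."""
    risk = 0
    risk += 0 if managed_device else 2   # unmanaged devices carry the most weight
    risk += 0 if business_hours else 1
    risk += 0 if known_location else 1
    if risk == 0:
        return "standard"
    if risk <= 2:
        return "step-up-auth"            # require additional verification
    return "restricted"                  # limit to less sensitive operations
```

The same user thus receives different access levels as context changes, without any rule ever naming the user explicitly.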

For AI inference endpoints, implement request-level validation that examines input characteristics before processing. This catches adversarial inputs, prompt injection attempts, and data exfiltration queries at the gateway level. OWASP's 2024 Top 10 for LLM Applications identifies prompt injection as the number-one risk for AI applications, and request-level validation is the primary mitigation.
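A gateway-level filter can be sketched as below. The patterns are illustrative only; real prompt-injection defenses combine classifiers, allow-lists, and output checks rather than a short regex list:

```python
import re

# Illustrative deny patterns; a production gateway would use far richer signals.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?system prompt", re.I),
]

def validate_request(prompt: str, max_chars: int = 4000) -> bool:
    """Reject oversized inputs and known injection phrasings before inference."""
    if len(prompt) > max_chars:
        return False  # oversized prompts are a common exfiltration/extraction vector
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)
```

The point of placing this at the gateway is that a rejected request never consumes model capacity or reaches model context at all.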

Continuous monitoring should track behavioral baselines for every AI system component. Anomaly detection flags deviations such as unusual query volumes, unexpected data access patterns, model performance degradation (which may indicate adversarial manipulation), or new communication patterns between services. Palo Alto Networks' 2024 AI Security Report found that behavioral analytics detected 73% of AI-targeted attacks before they achieved their objectives, compared to 31% for rule-based detection systems.
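The baseline-and-deviation idea can be sketched as a z-score check over a rolling metric such as queries per minute; the threshold value is an illustrative assumption:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag a metric (e.g. queries/minute) that deviates sharply from its baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # a perfectly flat baseline makes any change notable
    return abs(current - mu) / sigma > z_threshold
```

Real behavioral analytics model many correlated signals at once, but each reduces to this same question: how far is current behavior from this component's own learned normal?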

Data Protection Across the AI Lifecycle

Zero trust data protection for AI systems spans the entire lifecycle: collection, preprocessing, training, evaluation, inference, and archival. Each stage requires specific controls.

During data collection and preprocessing, implement data classification that automatically labels data by sensitivity level and regulatory category. Apply tokenization or differential privacy to sensitive fields before they enter training pipelines. Google's 2024 Privacy Engineering Report demonstrated that differential privacy with carefully tuned epsilon values (2.0–5.0) preserves 92–97% of model utility while providing mathematical privacy guarantees.
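The mechanism behind those guarantees can be sketched as the classic Laplace mechanism. This is a minimal illustration; production systems should use a vetted differential-privacy library rather than hand-rolled sampling, and the sensitivity and epsilon values are illustrative:

```python
import math
import random

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: random.Random) -> float:
    """Release value + Laplace(sensitivity/epsilon) noise.

    Smaller epsilon means more noise and stronger privacy; the 2.0-5.0 range
    cited above trades modest noise for a formal guarantee.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    # Inverse-CDF sample of the Laplace distribution centered on `value`
    return value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```

Passing an explicit `random.Random` instance keeps the sketch testable; a real deployment would draw from a cryptographically secure source.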

During training, encrypt data at rest and in transit. Implement secure enclaves (Intel SGX, AMD SEV, AWS Nitro) for training workloads processing highly sensitive data. Confidential computing ensures that not even cloud infrastructure administrators can access training data or model weights during computation. Microsoft's 2024 Confidential AI benchmark showed that secure enclave training adds only 8–15% overhead for transformer-based models.

During inference, implement output filtering that prevents the model from disclosing sensitive training data, personal information, or proprietary content. Apply rate limiting and output monitoring to detect potential model extraction attacks, where adversaries systematically query a model to reconstruct its capabilities. Research from ETH Zurich's 2024 AI Security Lab showed that rate-limited, output-filtered endpoints reduced successful model extraction attacks by 89%.
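The rate-limiting half of that defense can be sketched as a per-client sliding window; the limits and client identifiers are hypothetical:

```python
import time
from collections import deque
from typing import Dict, Optional

class SlidingWindowLimiter:
    """Per-client sliding-window limit to slow systematic extraction queries."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits: Dict[str, deque] = {}

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client_id, deque())
        while q and now - q[0] > self.window:
            q.popleft()  # drop requests that have aged out of the window
        if len(q) >= self.max_requests:
            return False  # budget exhausted: reject before inference runs
        q.append(now)
        return True
```

Extraction attacks depend on volume, so even a generous per-client budget raises the cost of reconstructing a model by orders of magnitude.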

Supply Chain Security for AI Components

AI systems have complex supply chains: pretrained models, training datasets, software libraries, hardware accelerators, and cloud services. Each link is a potential attack vector.

Implement software bill of materials (SBOM) for all AI components, tracking model provenance (who trained it, on what data, with what configuration), library versions and known vulnerabilities, data lineage from source through preprocessing to training, and infrastructure dependencies and their security postures.

The Open Source Security Foundation (OpenSSF) Scorecard project provides automated security assessments for open-source AI libraries. Organizations using OpenSSF Scorecard evaluations reduced supply chain compromises by 62%, according to the Linux Foundation's 2024 Open Source Security Report.

For pretrained models, verify model integrity through cryptographic hashing and validate model cards documenting training data, known biases, and intended use cases. MITRE's ATLAS (Adversarial Threat Landscape for AI Systems) framework provides a structured approach to identifying and mitigating AI supply chain risks.
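Integrity verification reduces to comparing a streamed digest of the artifact against the value pinned in the model card or SBOM. A minimal sketch, with placeholder file paths:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a model artifact through SHA-256 without loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, pinned_digest: str) -> bool:
    """Refuse to load a model whose digest differs from the SBOM-pinned value."""
    return file_sha256(path) == pinned_digest
```

Running this check at load time, not just at download time, also catches artifacts tampered with after they entered the model registry.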

Incident Response for AI-Specific Threats

Traditional incident response playbooks are insufficient for AI-specific threats such as model poisoning, adversarial evasion, data extraction, and prompt injection. Develop AI-specific playbooks that address containment strategies including model isolation, endpoint shutdown, and traffic rerouting to fallback systems.

Investigation procedures should include analyzing adversarial inputs, examining model behavior changes, auditing data pipeline integrity, and reviewing access logs for anomalous patterns. Recovery steps must cover model rollback to known-good versions, data pipeline validation and reprocessing, and post-incident model evaluation against adversarial benchmarks.
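The threat-to-playbook mapping can be sketched as a simple registry; the threat names and steps below are illustrative, drawn from the containment and recovery measures described above:

```python
# Hypothetical AI-specific playbook registry: threat type -> ordered response steps
PLAYBOOKS = {
    "prompt_injection": [
        "isolate affected endpoint",
        "capture adversarial inputs for analysis",
        "tighten gateway validation rules",
    ],
    "model_poisoning": [
        "freeze training pipeline",
        "audit data pipeline integrity",
        "roll back to last known-good model version",
        "re-evaluate against adversarial benchmarks",
    ],
}

def containment_steps(threat_type: str) -> list[str]:
    """Return the AI-specific runbook, falling back to generic escalation."""
    return PLAYBOOKS.get(threat_type, ["escalate to security operations center"])
```

Encoding playbooks as data rather than prose also lets tabletop exercises and automation consume the same source of truth.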

SANS Institute's 2024 AI Incident Response Survey found that organizations with AI-specific playbooks resolved incidents 58% faster and experienced 41% less data loss than those applying generic cybersecurity playbooks to AI incidents.

Conduct quarterly tabletop exercises that specifically simulate AI attacks. These exercises build muscle memory for rapid response and reveal gaps in detection, communication, and recovery procedures before real incidents expose them.

Building a Zero Trust AI Security Roadmap

Implementing zero trust for AI systems is a multi-year journey. Begin with an assessment of current AI assets, data flows, access patterns, and security gaps. Prioritize based on data sensitivity and business criticality; production inference endpoints handling customer data typically warrant immediate attention.

Phase implementation to avoid disrupting AI operations. Start with identity and access management hardening, then layer in micro-segmentation, followed by continuous monitoring and adaptive controls. Each phase delivers independent security value while building toward comprehensive zero trust coverage.

The investment is substantial but justified. Forrester's 2024 Zero Trust ROI study calculated that mature zero trust implementations deliver a 92% reduction in breach impact, a 65% decrease in security operations workload through automation, and a net positive ROI within 18 months when accounting for reduced breach costs, streamlined compliance, and operational efficiency gains.

Common Questions

Why do AI systems require zero trust architecture?

AI systems present unique security challenges: model APIs process sensitive data at massive scale, training pipelines ingest from diverse sources vulnerable to poisoning, and inference endpoints face adversarial attacks and model extraction. IBM's 2024 report found zero trust reduces AI-specific breach costs by 51% through faster containment and smaller blast radii.

How should micro-segmentation be applied to AI infrastructure?

Micro-segmentation isolates AI workloads into four boundaries: data (training vs inference), model (separate models), pipeline (training vs production), and API (external communications). Illumio's 2024 study found micro-segmented AI infrastructure contained breaches 68% faster and reduced blast radius by 83% compared to flat networks.

How does continuous validation work for AI systems?

Continuous validation replaces one-time authentication with real-time verification of every request against current context (user identity, device posture, request patterns, and threat intelligence). Risk-adaptive controls dynamically adjust permissions. Palo Alto Networks found behavioral analytics detected 73% of AI attacks before achieving objectives vs 31% for rule-based systems.

How is data protected across the AI lifecycle?

Apply differential privacy during preprocessing (preserving 92-97% model utility), encrypt data at rest and in transit, use secure enclaves for sensitive training (8-15% overhead for transformers), implement output filtering during inference, and apply rate limiting to prevent model extraction, which reduces successful extraction attacks by 89%.

What is the return on investment for zero trust?

Forrester's 2024 study found mature zero trust delivers 92% reduction in breach impact, 65% decrease in security operations workload, and net positive ROI within 18 months. IBM's data shows breach costs are 42% lower ($3.28M vs $5.65M) for organizations with mature zero trust implementations.
