Abstract
An industry-wide framework for the responsible development and deployment of synthetic media (deepfakes and other generated content), developed by the multi-stakeholder Partnership on AI. It covers detection, disclosure, consent, and enterprise governance requirements.
About This Research
Publisher: Partnership on AI
Year: 2025
Type: Case Study
Source: Partnership on AI: Responsible Practices for Synthetic Media
Relevance
Industries: Cross-Industry
Pillars: AI Governance & Risk Management
Use Cases: Cybersecurity & Threat Detection
Provenance and Authenticity Infrastructure
Central to the framework's recommendations is the establishment of robust provenance infrastructure that enables content consumers to verify the origin and modification history of digital media. Technical approaches include cryptographic content credentials, watermarking schemes resilient to common transformations, and decentralised registries that record creation metadata without compromising creator privacy. The framework advocates for industry-wide adoption of interoperable provenance standards rather than proprietary solutions that fragment the verification ecosystem.
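One way to see why a verifiable modification history matters is a minimal hash-chain sketch: each provenance record commits to the hash of the record before it, so tampering with any earlier entry invalidates everything after it. This is an illustrative toy, not the framework's recommended mechanism; real content-credential standards (such as C2PA) additionally use public-key signatures to bind records to an identifiable signer, which a bare hash chain cannot do. All names here are hypothetical.

```python
import hashlib
import json

def entry_hash(body: dict) -> str:
    # Deterministic hash over the entry's canonical JSON form.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, action: str, actor: str) -> list:
    # Each record commits to the previous record's hash, so any
    # later tampering with history breaks the chain.
    prev = chain[-1]["hash"] if chain else None
    body = {"action": action, "actor": actor, "prev": prev}
    chain.append({**body, "hash": entry_hash(body)})
    return chain

def verify_chain(chain: list) -> bool:
    prev = None
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, "created", "studio-camera")
append_entry(chain, "colour-graded", "editor-01")
assert verify_chain(chain)

chain[0]["actor"] = "someone-else"  # tamper with the creation record
assert not verify_chain(chain)
```

The chain makes edits tamper-evident but says nothing about who made them; that authenticity gap is exactly why the framework points to cryptographic content credentials rather than bare hashing.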
Platform Responsibilities and Enforcement
Distribution platforms bear particular responsibility under the framework given their role as amplification mechanisms for both beneficial and harmful synthetic content. Recommended practices include mandatory synthetic content labelling at the point of upload, automated detection systems operating alongside human review teams, and transparent appeals processes for content creators whose material is incorrectly flagged. The framework explicitly acknowledges the tension between rapid content moderation and accurate classification, recommending that platforms invest in reducing false positive rates to avoid chilling legitimate creative expression.
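The trade-off between moderation speed and false positives described above is often handled with a two-threshold routing policy: only high-confidence detections are auto-labelled, an ambiguous middle band is escalated to human reviewers, and low scores pass through. The threshold values and names below are illustrative assumptions, not figures from the framework.

```python
from dataclasses import dataclass

AUTO_LABEL = 0.95    # assumed: confident enough to label without review
HUMAN_REVIEW = 0.60  # assumed: below this, treat as likely authentic

@dataclass
class Decision:
    action: str  # "label", "review", or "pass"
    score: float

def route(detector_score: float) -> Decision:
    # Two thresholds instead of one: the ambiguous middle band goes to
    # human reviewers, trading moderation speed for fewer false positives
    # that would otherwise chill legitimate creative expression.
    if detector_score >= AUTO_LABEL:
        return Decision("label", detector_score)
    if detector_score >= HUMAN_REVIEW:
        return Decision("review", detector_score)
    return Decision("pass", detector_score)

assert route(0.99).action == "label"
assert route(0.75).action == "review"
assert route(0.10).action == "pass"
```

Lowering the auto-label threshold speeds up enforcement at the cost of more incorrect flags, which is precisely the tension the framework asks platforms to manage via transparent appeals.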
Consent and Individual Rights
The framework introduces a tiered consent model for synthetic media depicting identifiable individuals, ranging from explicit opt-in for commercial applications to presumed consent for clearly satirical or educational uses. This graduated approach recognises that a single consent standard cannot accommodate the diverse contexts in which synthetic depictions occur. Enforcement mechanisms include both technical controls—such as facial recognition opt-out registries—and legal recommendations for jurisdictions developing synthetic media legislation.
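A tiered consent model of this kind can be sketched as a mapping from use context to a required consent level, with an opt-out registry overriding any presumed-consent tier. The category names and the default below are assumptions for illustration; the framework defines its tiers per context rather than by these labels.

```python
from enum import Enum

class ConsentTier(Enum):
    EXPLICIT_OPT_IN = "explicit opt-in required"
    PRESUMED = "consent presumed"

# Assumed mapping: commercial uses need explicit opt-in; clearly
# satirical or educational uses carry presumed consent.
TIERS = {
    "commercial": ConsentTier.EXPLICIT_OPT_IN,
    "satire": ConsentTier.PRESUMED,
    "education": ConsentTier.PRESUMED,
}

def required_consent(context: str, opted_out: bool) -> ConsentTier:
    # Unknown contexts default to the strictest tier (an assumption here);
    # an opt-out registry entry overrides presumed consent.
    tier = TIERS.get(context, ConsentTier.EXPLICIT_OPT_IN)
    if opted_out and tier is ConsentTier.PRESUMED:
        return ConsentTier.EXPLICIT_OPT_IN
    return tier

assert required_consent("commercial", opted_out=False) is ConsentTier.EXPLICIT_OPT_IN
assert required_consent("satire", opted_out=False) is ConsentTier.PRESUMED
assert required_consent("satire", opted_out=True) is ConsentTier.EXPLICIT_OPT_IN
```

Defaulting unknown contexts to the strictest tier mirrors the graduated approach: the burden falls on the creator to establish that a lighter tier applies.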