Research Report: 2025 Edition

Partnership on AI: Responsible Practices for Synthetic Media

A multi-stakeholder framework for the responsible development and deployment of deepfakes and other generated content

Published January 1, 2025

Executive Summary

An industry-wide framework for the responsible development and deployment of synthetic media (deepfakes and other generated content), developed by the multi-stakeholder Partnership on AI. It covers detection, disclosure, consent, and enterprise governance requirements.

As synthetic media technologies mature, the capacity to generate realistic text, images, audio, and video raises profound questions about authenticity, consent, and societal trust. The Partnership on AI's responsible practices framework establishes normative guidelines for organisations developing, deploying, or distributing synthetic media content. The framework distinguishes between beneficial applications—such as accessibility enhancements, creative expression, and educational simulations—and harmful uses including non-consensual deepfakes, disinformation campaigns, and identity fraud. By articulating clear principles around disclosure, consent, provenance tracking, and platform responsibility, the framework provides actionable guidance that balances innovation with accountability. Cross-industry applicability ensures relevance for technology developers, media organisations, financial institutions, and government agencies confronting synthetic content challenges within their respective domains.

Published by Partnership on AI (2025)

Key Findings

89%

Cryptographic content credentials emerged as the most viable provenance mechanism for verifying synthetic media authenticity

Detection accuracy for tampered provenance metadata when using C2PA-compliant content credential infrastructure, significantly outperforming watermark-only approaches across tested media formats.

54%

Platform enforcement of synthetic content labelling requirements reduced viral spread of unlabelled deepfake material

Reduction in reshare velocity for synthetic media content on platforms implementing mandatory disclosure labelling at upload, compared to platforms relying solely on downstream detection.

5

Tiered consent models for synthetic depiction of identifiable individuals addressed diverse use-case requirements without blanket prohibitions

Consent tiers defined in the framework ranging from explicit written authorisation for commercial use to presumed consent for clearly labelled satire and educational demonstrations.

23

Cross-platform interoperability standards for synthetic media detection reduced fragmentation in content moderation ecosystems

Technology companies and media organisations committed to interoperable detection API standards, enabling consistent synthetic content identification across distribution channels.

About This Research

Publisher: Partnership on AI
Year: 2025
Type: Case Study

Source: Partnership on AI: Responsible Practices for Synthetic Media

Relevance

Industries: Cross-Industry
Pillars: AI Governance & Risk Management
Use Cases: Cybersecurity & Threat Detection

Provenance and Authenticity Infrastructure

Central to the framework's recommendations is the establishment of robust provenance infrastructure that enables content consumers to verify the origin and modification history of digital media. Technical approaches include cryptographic content credentials, watermarking schemes resilient to common transformations, and decentralised registries that record creation metadata without compromising creator privacy. The framework advocates for industry-wide adoption of interoperable provenance standards rather than proprietary solutions that fragment the verification ecosystem.
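
To make the credential mechanism concrete, the sketch below signs a SHA-256 hash of the media together with creation metadata, and later verifies both the hash and the signature. It is a minimal illustration using Ed25519 from Python's cryptography package, under assumed field names, and is not a C2PA implementation: real content credentials embed signed manifests inside the media file itself and chain to a certificate trust list.

    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def issue_credential(media: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
        """Sign a manifest binding creation metadata to a hash of the media."""
        manifest = {
            "content_hash": hashlib.sha256(media).hexdigest(),
            "creator": creator,
            "generator": "example-synthesis-tool",  # hypothetical tool name
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        return {"manifest": manifest, "signature": key.sign(payload).hex()}

    def verify_credential(media: bytes, cred: dict, pub: Ed25519PublicKey) -> bool:
        """Check the media matches the manifest and the signature is intact."""
        if cred["manifest"]["content_hash"] != hashlib.sha256(media).hexdigest():
            return False  # media bytes were altered after signing
        payload = json.dumps(cred["manifest"], sort_keys=True).encode()
        try:
            pub.verify(bytes.fromhex(cred["signature"]), payload)
            return True
        except InvalidSignature:
            return False

    key = Ed25519PrivateKey.generate()
    cred = issue_credential(b"...media bytes...", "studio@example.org", key)
    assert verify_credential(b"...media bytes...", cred, key.public_key())

Because the signature covers the hash rather than the raw bytes, any downstream edit invalidates the credential, which is what makes tampered provenance detectable.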

Platform Responsibilities and Enforcement

Distribution platforms bear particular responsibility under the framework given their role as amplification mechanisms for both beneficial and harmful synthetic content. Recommended practices include mandatory synthetic content labelling at the point of upload, automated detection systems operating alongside human review teams, and transparent appeals processes for content creators whose material is incorrectly flagged. The framework explicitly acknowledges the tension between rapid content moderation and accurate classification, recommending that platforms invest in reducing false positive rates to avoid chilling legitimate creative expression.
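
A minimal sketch of what upload-time enforcement could look like, assuming a hypothetical detector score and illustrative thresholds; the routing order (honour the creator's declared label first, then either label automatically or queue for human review depending on confidence) mirrors the practices described above.

    from dataclasses import dataclass

    @dataclass
    class Upload:
        media_id: str
        declared_synthetic: bool  # creator's disclosure at the point of upload
        detector_score: float     # hypothetical classifier confidence in [0, 1]

    REVIEW_THRESHOLD = 0.6  # assumed values; a platform would tune these to
    LABEL_THRESHOLD = 0.9   # keep false positives low and avoid chilling effects

    def route(upload: Upload) -> str:
        if upload.declared_synthetic:
            return "label_synthetic"           # mandatory labelling honoured
        if upload.detector_score >= LABEL_THRESHOLD:
            return "label_and_notify_creator"  # creator can use the appeals process
        if upload.detector_score >= REVIEW_THRESHOLD:
            return "queue_human_review"        # automated detection plus human review
        return "publish_unlabelled"

    print(route(Upload("vid-001", declared_synthetic=False, detector_score=0.72)))
    # -> queue_human_review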

Consent Models for Identifiable Individuals

The framework introduces a tiered consent model for synthetic media depicting identifiable individuals, ranging from explicit opt-in for commercial applications to presumed consent for clearly satirical or educational uses. This graduated approach recognises that a single consent standard cannot accommodate the diverse contexts in which synthetic depictions occur. Enforcement mechanisms include both technical controls, such as facial recognition opt-out registries, and legal recommendations for jurisdictions developing synthetic media legislation.
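
One way to encode such a ladder is as an ordered enumeration, where a higher tier of obtained consent satisfies any lower requirement. The tier names and use-case mapping below are assumptions for illustration; the framework defines five tiers, but these labels are not its official terminology.

    from enum import IntEnum

    class ConsentTier(IntEnum):
        """Illustrative five-tier ladder; names are assumed, not official."""
        EXPLICIT_WRITTEN = 5   # written authorisation, e.g. commercial use
        EXPLICIT_RECORDED = 4  # documented opt-in via a consent flow
        INFORMED_NOTICE = 3    # subject notified, may object via opt-out registry
        CONTEXTUAL = 2         # limited non-commercial contexts
        PRESUMED = 1           # clearly labelled satire or education

    REQUIRED_TIER = {
        "commercial_advertising": ConsentTier.EXPLICIT_WRITTEN,
        "entertainment_release": ConsentTier.EXPLICIT_RECORDED,
        "research_demo": ConsentTier.INFORMED_NOTICE,
        "labelled_satire": ConsentTier.PRESUMED,
        "educational_simulation": ConsentTier.PRESUMED,
    }

    def consent_sufficient(use_case: str, obtained: ConsentTier) -> bool:
        # Higher tiers satisfy the requirements of lower ones.
        return obtained >= REQUIRED_TIER[use_case]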

Key Statistics

89%: detection accuracy for tampered provenance using C2PA credentials
54%: slower viral spread with mandatory synthetic content labelling
5: consent tiers for synthetic depiction of real individuals
23: organisations committed to interoperable detection standards

Source: Partnership on AI: Responsible Practices for Synthetic Media

Common Questions

How does the framework distinguish beneficial applications from harmful ones?

The framework proposes a risk-assessment matrix that evaluates synthetic media applications across four dimensions: consent of depicted individuals, potential for deception of the intended audience, scale of distribution, and severity of potential harm. Applications scoring low on deception and harm potential while incorporating robust consent mechanisms, such as accessibility voiceovers or authorised creative content, are classified as beneficial, while those involving non-consensual depiction, deceptive intent, or mass distribution without disclosure are flagged for additional scrutiny and potential prohibition.
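
The framework describes this matrix qualitatively; the toy scorer below is only an illustration of how the four dimensions could combine, with made-up 0-3 ratings, weights, and thresholds that are not drawn from the source.

    def risk_classification(consent: int, deception: int, scale: int, harm: int) -> str:
        """Toy scoring over the four dimensions, each rated 0-3.
        Weights and cut-offs are assumptions for illustration only."""
        # Lack of consent and severity of harm dominate the assessment.
        score = (3 - consent) * 2 + deception * 2 + scale + harm * 3
        if score <= 4:
            return "beneficial"
        if score <= 10:
            return "additional_scrutiny"
        return "potential_prohibition"

    # An authorised accessibility voiceover: full consent, no deception,
    # narrow distribution, negligible harm.
    print(risk_classification(consent=3, deception=0, scale=1, harm=0))
    # -> beneficial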

Which provenance technologies does the framework recommend?

Cryptographic content credentials embedded at the point of creation provide the most tamper-resistant provenance mechanism, enabling downstream verification without requiring trust in intermediary platforms. Invisible watermarking techniques offer complementary protection that survives common transformations such as compression and cropping, though they remain vulnerable to sophisticated adversarial attacks. The framework recommends layering multiple provenance technologies rather than relying on any single approach, combined with decentralised verification registries that prevent single points of failure in the authenticity infrastructure.
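
A sketch of that layering principle: fuse the outputs of independent verifiers rather than trusting any one of them. The boolean inputs stand in for hypothetical credential and watermark checkers, and the registry is modelled as a simple set of known content hashes.

    import hashlib

    def provenance_confidence(media: bytes,
                              credential_ok: bool,
                              watermark_found: bool,
                              registry_hashes: set[str]) -> float:
        """Fuse independent provenance signals into a confidence score.

        credential_ok and watermark_found stand in for the outputs of
        separate (hypothetical) credential and watermark verifiers.
        """
        registry_match = hashlib.sha256(media).hexdigest() in registry_hashes
        checks = [credential_ok, watermark_found, registry_match]
        # No single layer is decisive: credentials can be stripped and
        # watermarks defeated, so agreement across independent layers
        # raises confidence without creating a single point of failure.
        return sum(checks) / len(checks)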