Synthetic Talent, Frontier AI, and the AI Law in 2025: From Tilly Norwood to California’s SB-53 — A Cross-Border Playbook for Studios, Platforms, and Brands
Updated: 2 October 2025
Executive Summary
The public unveiling of the virtual “actress” Tilly Norwood in late September 2025 and the rapid advance of multimodal generators (e.g., video systems capable of photorealistic outputs) have forced a single, urgent question onto the desks of studios, streamers, agencies, brands, and tech platforms: how do we operationalise lawful, safe, and ethical “synthetic talent” at scale—without infringing rights or breaching emerging AI safety laws?
This article provides a practitioner-grade, cross-border roadmap centred on:
- California’s SB-53 (2025) on “frontier AI safety and transparency,” a likely global bellwether for high-risk model governance.
- U.S. legal pillars affecting synthetic performers: right of publicity, copyright, neighbouring rights, FTC advertising rules, and SAG-AFTRA obligations.
- French and EU rules: right to image and privacy, performers’ neighbouring rights, text & data mining (TDM) opt-outs, GDPR, and EU AI Act transparency duties for synthetic content and deepfakes.
- India’s evolving context: extra-territoriality, safe-harbour debates, DPDP Act 2023, and Copyright Act intersections as Indian courts and regulators grapple with cross-border model training and platform liability.
- Contracts, policies, and product controls you can deploy now: talent and data licensing, training-use representations, provenance/watermarking, safety evaluations, misuse response, and audit trails.
We conclude with decision matrices, sample clause language, red-flag checklists, and a comparative table of obligations by jurisdiction. Where appropriate, we also flag reputational and competition-law angles that often get missed when teams focus only on copyright or privacy risk.

If you’re building or commissioning synthetic performers, this is your all-in-one implementation guide for 2025. For strategic advice or pilot reviews, you can reach our team at Tahmidur Remura Wahid (TRW) Law Firm via our site: tahmidurrahman.com.
1) The Moment: Why Tilly Norwood Matters
Tilly Norwood—a fully synthetic, agency-ready “actress” created through a private studio pipeline—signals a shift from AI-assisted post-production toward AI-native performers as commercial properties. The timing is notable:
- Post-strike environment: After the 2023–24 Hollywood labour disruptions, SAG-AFTRA secured guardrails on “digital replicas” and “synthetic performers.” The prospect of an avatar signing with a talent agency tests those boundaries.
- Parallel music trends: Synthetic singers (e.g., recent headlines around AI-native “artists”) are moving from novelty to revenue-backed deals.
- Consumer trust cliff: Audiences tolerate CGI; they resent deception. Labelling, disclosure, and truth-in-advertising will be decisive for adoption.
- Model capability shock: New video models produce convincing long-form sequences. As realism surges, so does the likelihood of likeness misappropriation and source-data disputes.
In short, even if your organisation is not signing a virtual star today, your contracts, compliance architecture, and brand safety stack must already anticipate synthetic talent in campaigns, films, games, sports rights activations, and influencer programs.
2) California’s SB-53 (2025): What “Frontier AI” Compliance Looks Like
California’s SB-53 (enacted 2025) squarely targets frontier models—systems above defined compute/capability thresholds whose misuse could create material risks (deception at scale, bio/chemical assistance, critical infrastructure manipulation, or severe content harms). Expect rulemaking and enforcement to evolve, but organisations should plan around six pillars that SB-53 brings into view:
2.1 Model Risk Classification
- Thresholds (compute/training-run or empirical capability) draw a line between general-purpose and frontier models.
- If your model crosses these triggers, risk-control duties apply across the lifecycle (training → tuning → deployment).
2.2 Pre-Deployment Safety Evaluations
- Documented evals and red-teaming for deception, autonomy/agentic behaviours, unsafe bio/chem, and system prompt-injection resilience.
- Maintain a Model System Card capturing training data sources (at category level), safety mitigations, and known residual risks.
2.3 Transparency & Content Provenance
- Clear labelling for AI-generated audio-visual outputs likely to be confused with reality.
- Watermarking and/or C2PA-style provenance metadata where technically feasible; maintain tooling attestations in your logs.
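To make the provenance duty concrete, here is a minimal, hedged sketch of the kind of record a pipeline might bind to each rendered asset and append to its attestation log. This uses only the standard library; real deployments should use official C2PA tooling, and the function names and fields below are illustrative assumptions, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(asset_bytes: bytes, generator: str, model_version: str) -> dict:
    """Build a simplified, C2PA-inspired provenance record for one rendered asset.

    Illustrative only: shows the kind of fields an audit log would bind to the
    asset hash, not the actual C2PA manifest format.
    """
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": generator,            # e.g. the render pipeline name (assumption)
        "model_version": model_version,
        "synthetic": True,                 # disclosure flag for downstream labels
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def attestation_line(record: dict) -> str:
    """Serialise the record as one append-only JSON log line (a 'tooling attestation')."""
    return json.dumps(record, sort_keys=True)
```

The point of hashing the asset rather than storing it is that the log stays small and privacy-safe while still letting an auditor verify which exact output a given attestation covers.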
2.4 Containment & Misuse Response
- Structured incident reporting, kill-switch or capability throttling, downstream API policy enforcement, and partner audit rights.
- Developer/Customer Acceptable Use Policies (AUPs) explicitly banning impersonation and unconsented likeness synthesis.
2.5 Data Governance & IP
- Training-data source hygiene (license, TDM-exception compliance, opt-out honouring) and dataset bills of materials (DBOMs).
- Retention/deletion policies for copyright takedown compliance and dataset purges upon settlement or court order.
2.6 Accountability & Penalties
- Executable governance maps: who signs off on evals, who handles notices, who files reports.
- Penalties scale with harm; record-keeping is your first line of defence.
Why SB-53 matters globally: As with California privacy and auto emissions, SB-53 is poised to radiate into platform policy, vendor diligence, and procurement checklists worldwide. Even non-U.S. productions with California distribution or partners will feel the pull-through.
3) U.S. Legal Framework for Synthetic Performers (2025)
3.1 Right of Publicity (State-Level)
- Protects name, image, likeness (NIL), voice, signature—and in some states distinctive gestures or persona—against unauthorised commercial use.
- California and New York provide robust protections; many states recognise post-mortem rights.
- Composite training that outputs a recognisable living person can trigger claims even where no single source dominates.
- Risk controls: consent-based NIL licences; recorded-voice synthesis addenda; dataset filters; style-of disclaimers (“in the style of” ≠ a defence if the output evokes a specific identifiable individual).
3.2 Copyright & Authorship
- Human authorship is the touchstone; non-human outputs alone lack federal copyright.
- For synthetic performers, contract is king: ownership and exploitation flow from the pipeline agreements (model builder ↔ studio ↔ agency ↔ brand).
- Derivative-work and reproduction claims may arise from training or from generated outputs that are substantially similar to protected works.
3.3 Performers’ Neighbouring Rights
- Unauthorised extraction of performances (expressions, timing, voice) from recordings used to clone an actor can violate neighbouring rights.
- Union agreements and individual work-for-hire terms should specify any synthetic reuse, including scope, duration, compensation, and revocation.
3.4 Unfair/Deceptive Practices: FTC
- Truth-in-advertising: disclose when an endorser is synthetic; avoid claims implying human experience or endorsement.
- Material connections and astroturfing rules still apply to synthetic influencers (disclosure labels, audience clarity).
3.5 Labour/Union Rules: SAG-AFTRA
- Advance notice and meaningful consent for digital replicas and synthetic performers.
- Separate payment schedules for capture, generation, reuse, and derivative applications; opt-outs and limited scopes are common.
4) French & EU Framework
4.1 Right to Image & Privacy (Civil Code, Art. 9)
- Strong image rights allow individuals to restrain unauthorised uses of their likeness—even in composite avatars that remain recognisable.
- Claims may also proceed under “parasitism” (unfair riding on another’s reputation or persona).
4.2 Performers’ Neighbouring Rights (French Intellectual Property Code)
- Actors and voice artists hold neighbouring rights in their performances; digital imitation can trigger enforcement when it appropriates expression rather than ideas.
4.3 EU Copyright & TDM (Directive 2019/790)
- Research and commercial TDM exceptions exist, but rightholders can opt out—especially for commercial TDM.
- Respect TDM opt-outs (robots.txt / rights metadata) and maintain TDM compliance logs.
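One mechanical piece of honouring TDM opt-outs is checking robots.txt before ingesting a page. A minimal sketch using the standard library's `urllib.robotparser` (note the caveat in the comment: robots.txt is only one opt-out signal, and rights-metadata reservations must be checked separately; the bot name below is a hypothetical example):

```python
from urllib.robotparser import RobotFileParser

def tdm_fetch_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check a site's robots.txt before ingesting a page for TDM.

    robots.txt is only one opt-out signal; rights metadata (e.g. TDM
    reservation declarations) must be checked separately. Sketch only,
    not legal advice.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```

Each decision (URL, user agent, result, timestamp) should then be written to the TDM compliance log mentioned above, so opt-out honouring is demonstrable after the fact.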
4.4 GDPR & Biometric Data
- Images, voiceprints, motion-capture can constitute biometric data if used for identification; processing requires a lawful basis, heightened safeguards, and often a DPIA.
- International data transfers (e.g., to non-EEA GPU farms) must use adequacy or appropriate safeguards (SCCs + TIAs), especially for sensitive or large-scale processing.
4.5 EU AI Act (2024–2026 Phased-In)
- Transparency: disclose AI-generated or deepfake content, with clear, visible user notices; maintain technical means for content traceability.
- High-risk & general-purpose model rules: risk management, data governance, logging, cybersecurity, evals, and post-market monitoring.
- Synthetic performers in audiovisual: producers must implement on-screen or metadata labels; platforms must support provenance persistence.
5) India’s Evolving Context (2025)
India is rapidly clarifying how global AI services interact with domestic law:
- Copyright Act: disputes over model training on India-origin works, and whether style-mimicry or output substitution causes market harm.
- IT Act safe harbour (Section 79) & intermediaries: whether AI platforms qualify for safe harbour and to what extent due diligence rules require takedowns, watermarking, or origin disclosure.
- DPDP Act 2023: notice and consent regimes, purpose limitation, children’s data, and cross-border transfer controls; sectoral rules expected.
- Jurisdictional stance: companies arguing no training occurs in India may still face effects-based jurisdiction if infringing outputs harm Indian rightholders or consumers.
- Advertising Standards & consumer protection: disclosure duties when the “endorser” is not human.
Takeaway: Expect more Indian suits over music, film dialogues, and celebrity voices, and greater scrutiny of platforms that host or monetise synthetic endorsements aimed at Indian consumers.
6) Strategic Risk Map for Synthetic Talent
6.1 Core Risk Buckets
- Likeness & Persona: recognisability triggers right-of-publicity/right-to-image claims.
- Training Data & TDM: unlawful acquisition/ignored opt-outs; contaminated datasets.
- Output Similarity: substantial similarity to protected works; voice-clone confusion.
- Labour & Consent: missing union notices, absent performer approvals, unclear residuals.
- Privacy & Biometrics: DPIA gaps; no legal basis for capture or motion-data reuse.
- Transparency & Consumer Protection: undisclosed synthetic endorsements.
- Safety & Security: deceptive media misuse; inadequate provenance; jailbreaks.
- Competition/Passing Off: parasitic conduct; unfair competition against human talent.
6.2 Risk-Scored Controls (Deploy Now)
- NIL & Performance Licences (U.S./EU/India variants) with synthesis-specific consents.
- Training-Use Representations & Warranties from vendors; indemnities for dataset claims; purge obligations upon valid notice or settlement.
- Provenance & Watermarking (C2PA/XMP) and disclosure UX playbooks—trailers, credits, ad tags, and platform-level labels.
- Eval/Red-Team Protocols for deception, voice-clone misuse, and impersonation prompts.
- DPIA Library: templates for motion capture, voice, and facial rig pipelines.
- Union Compliance Calendars: notice, review windows, capture approvals, compensation tables.
- Incident & Takedown SLAs: 24–72h windows; rights-holder priority queues; evidence locks.
- Audit Trails: dataset DBOMs, fine-tune manifests, prompt/output logs (privacy-safe).
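As one illustration of the privacy-safe audit-trail control above, here is a minimal, tamper-evident log-entry sketch (stdlib only). The field names, the pseudonymisation upstream, and the hash-chaining scheme are assumptions for illustration, not a prescribed design.

```python
import hashlib
import json

def log_entry(prev_hash: str, prompt: str, output_id: str, user_pseudonym: str) -> dict:
    """One privacy-safe, tamper-evident audit entry.

    The raw prompt is never stored -- only its digest -- and each entry chains
    to the previous one so later edits to the log are detectable.
    """
    body = {
        "prev": prev_hash,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_id": output_id,
        "user": user_pseudonym,  # pseudonymised upstream; no direct identifiers here
    }
    # Hash the entry body (before this key exists) so the chain covers all fields.
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body
```

Storing digests rather than raw prompts keeps the trail useful for dispute evidence (you can prove a specific prompt was or was not submitted) without retaining free text that may itself contain personal data.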
7) Contracts You Need in 2025 (with Clause Starters)
These samples are illustrative starting points and must be localised per jurisdiction, union rules, and your deal economics.
7.1 Talent Engagement (Synthetic Performer Creation)
- Grant of Rights (Synthesis-Specific):
“Performer grants Producer the limited right to capture, model, synthesize, and render a digital performance substantially evoking Performer’s likeness, voice, gestures, and mannerisms (the ‘Digital Persona’) solely for the Project, for the Territory/Term specified, excluding political uses and categories listed in Schedule A.”
- Quality/Deception Guardrails:
“Producer shall implement conspicuous audience disclosures wherever the Digital Persona appears and shall not depict Performer engaging in acts reasonably likely to expose Performer to public contempt, scandal, or ridicule, unless specifically approved in writing.”
- Residuals & Reuse:
“Any reuse outside the Project requires separate written consent and triggers fees per Schedule B (media, territory, duration). A kill-switch endpoint shall disable further renders upon consent withdrawal, without prejudice to already-distributed materials.”
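The kill-switch mechanic in the Residuals & Reuse clause can be made concrete. A minimal in-memory sketch (the `ConsentRegistry` class and `persona_id` key are hypothetical; a production system would use persistent storage and signed withdrawal requests):

```python
class ConsentRegistry:
    """Minimal in-memory consent gate backing a 'kill-switch' render check.

    Hypothetical sketch: persona_id values and storage are assumptions; a real
    system would back this with a database and authenticated withdrawal requests.
    """

    def __init__(self) -> None:
        self._withdrawn: set[str] = set()

    def withdraw(self, persona_id: str) -> None:
        self._withdrawn.add(persona_id)

    def render_allowed(self, persona_id: str) -> bool:
        return persona_id not in self._withdrawn

def render_digital_persona(registry: ConsentRegistry, persona_id: str) -> str:
    # The gate runs before any new render; already-distributed materials are unaffected.
    if not registry.render_allowed(persona_id):
        raise PermissionError(f"consent withdrawn for {persona_id}; rendering disabled")
    return f"render-job:{persona_id}"
```

The design choice to gate only new renders mirrors the clause language: withdrawal stops future synthesis "without prejudice to already-distributed materials."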
7.2 Studio ↔ Model Vendor (Frontier Model Use)
- Training & Data Warranties:
“Vendor represents it has obtained, and will maintain, lawful bases for all training and tuning data, including compliance with TDM opt-outs and any NIL licences where identifiable persons are reasonably inferable. Vendor will purge specified datasets within 10 business days upon validated notice.”
- Safety & Provenance:
“Vendor shall maintain content provenance and output-traceability features and provide Studio with eval reports covering impersonation, deepfake misuse, and deceptive media generation.”
- Indemnities & Caps:
“Vendor shall indemnify Studio for third-party claims alleging likeness misappropriation, copyright infringement, or TDM violation stemming from Vendor’s training data or system outputs, subject to caps not less than 2× total fees and carve-outs for wilful misconduct.”
7.3 Brand/Advertiser ↔ Agency/Platform
- Disclosure & Labelling:
“Agency shall ensure that all synthetic talent is clearly disclosed to consumers in accordance with applicable FTC/EU/India rules; missing or obscured labels are a material breach.”
- Geo-Fencing & Rights Territories:
“Synthetic-talent assets will be geofenced per licensed territories; Platform shall block user-initiated relabelling that removes synthetic-origin notices.”
8) Product & Policy Blueprint (Studios, Platforms, Agencies)
8.1 Product Controls
- Creator Toggles: require “Synthetic Performer” flags at render time; default to on for realistic faces/voices.
- Similarity Safeguards: thresholds to block outputs too close to living celebrities unless licence hash is present.
- Auto-Disclosure: on-asset badges + API headers; studios can pull proof-of-provenance for audits.
- Prompt Interdiction: refusal patterns for “make it sound exactly like [X]” unless a licensed voiceprint token is supplied.
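The similarity-safeguard control above might look like this in outline. Everything here is an assumption for illustration: the 0.85 threshold, the hash scheme for licence tokens, and the upstream face/voice matcher that would supply `similarity_score`.

```python
import hashlib

# Hypothetical: populated from executed NIL licences; the entry is illustrative.
LICENSED_PERSONA_HASHES = {
    hashlib.sha256(b"persona:licensed-example").hexdigest(),
}

def licence_hash(persona_key: str) -> str:
    """Derive the licence-token hash checked at render time."""
    return hashlib.sha256(f"persona:{persona_key}".encode()).hexdigest()

def gate_render(similarity_score: float, persona_key: str, threshold: float = 0.85) -> bool:
    """Block outputs too similar to a known individual unless a licence hash is on file.

    similarity_score is assumed to come from an upstream face/voice matcher (0..1).
    """
    if similarity_score < threshold:
        return True  # not recognisably a specific person; no licence required
    return licence_hash(persona_key) in LICENSED_PERSONA_HASHES
```

Checking hashes rather than plaintext persona names is what allows the optional public registry in 8.2: platforms can compare licence status without exchanging the underlying licence documents.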
8.2 Policy
- No Personation: AUP bans of unconsented personation; appeals path for satire/commentary within legal limits.
- Notice-and-Action: accelerated takedown lanes for performers and rightholders, with shadow archive for evidence.
- Public Registry (Optional): hash of licensed models/personas for platform-to-platform checks.
9) Operationalising SB-53 + EU AI Act Together
- One Evidence Stack: unify your safety evals, system cards, DBOMs, and DPIAs so they satisfy California + EU documentation.
- Dual-Track Labelling: visual label schemes that meet FTC clarity and EU AI Act wording; store hash-bound attestations.
- Jurisdiction Flags: per-market toggles for TDM opt-out enforcement, GDPR transfer controls, SAG-AFTRA terms.
- Third-Party Audits: schedule pre-release audits for frontier-tier models and post-market monitoring cadences.
10) Misconceptions to Drop in 2025
- “We trained on public web pages, so we’re safe.”
Public availability ≠ licence. TDM regimes and NIL laws still bite.
- “We say ‘AI-inspired’, so no publicity issues.”
If outputs evoke a specific, recognisable human, you need consent or a defence (e.g., protected parody) that holds.
- “Outputs lack copyright, so nobody can sue us.”
Training, similarity, neighbouring-rights, unfair competition, and deception claims remain on the table.
- “Label once in the metadata.”
Labels must be conspicuous to audiences; metadata alone is not enough.
- “Union rules don’t apply if we generate from scratch.”
Synthetic performer provisions can still trigger notice/compensation where a production uses synthetic cast in roles traditionally occupied by humans.
11) Governance for Boards & GCs
- Charter: adopt a Responsible AI Charter with governance areas (safety, IP, privacy, labour, security, competition).
- RACI: map accountability for model approvals, rights clearance, labels, and incident response.
- KPIs: time-to-takedown, label coverage %, eval pass rates, % licensed datasets, % vendors with audits.
- Budgeting: allocate for licensing, evals, audits, and purge costs (post-settlement dataset deletions).
12) Red-Flag Checklist (Pre-Greenlight)
- ⬛ Talent likeness/voice licences in hand and verifiable.
- ⬛ Union notifications completed; compensation schedules agreed.
- ⬛ Training data DBOM shows TDM compliance and opt-outs honoured.
- ⬛ Eval reports for deception/impersonation risks signed-off.
- ⬛ Disclosure UX tested and A/B verified for comprehension.
- ⬛ DPIA completed for any biometric elements; cross-border transfer basis documented.
- ⬛ Incident runbook rehearsed (contact points, kill-switch, press lines).
- ⬛ Indemnities and caps aligned with risk; insurance endorsements updated.
13) FAQs (2025 Edition)
Q1: Can we say “in the style of [famous actor]” without permission?
You can say it; you likely can’t ship outputs that evoke a recognisable person for commercial use without consent. Labelling doesn’t cure misappropriation.
Q2: If an AI video is 100% synthetic, do we still need to label it?
Yes, in many jurisdictions you must disclose synthetic/deepfake content to avoid deception—EU AI Act and SB-53-style norms make this best practice.
Q3: Do we own a synthetic character we generate?
You can contractually own the character IP bundle (name, model weights/checkpoints, rig, textures) and the brand/trademark around it. But copyright in raw outputs may be limited without sufficient human authorship—so rely on contracts + trade marks + technical control.
Q4: Can we train on Instagram/TikTok videos?
Not safely without licences or a defensible exception (and compliance with opt-outs). Platform ToS often prohibit scraping/training.
Q5: How do we pay synthetic performers?
You pay human contributors (actors whose data seeded the model, voice donors, motion talent) per licences/residuals; you also budget for model vendor fees and reuse licences.
14) Implementation Timeline (90-Day Sprint)
- Weeks 1–2: Inventory projects; set risk tiers; freeze feature scope for high-risk launches.
- Weeks 3–4: Execute NIL/voice licences; vendor RFP addenda with training warranties; kick off DPIAs.
- Weeks 5–6: Run safety evals/red-teams; integrate provenance; design label UX.
- Weeks 7–8: Draft/execute union notices; finalise indemnities; build takedown SLAs.
- Weeks 9–10: Conduct cross-functional tabletop (misuse scenario); sign off system cards and DBOMs.
- Weeks 11–12: Launch with monitoring, post-market feedback, and quarterly audit cadence.
15) Case Study Patterns (What We’re Seeing)
- Studio A: moved to licensed voice banks + motion libraries with revocable tokens. Result: faster clearances, lower controversy.
- Brand B: synthetic influencer pilot in two markets; label comprehension improved trust metrics by 17% vs. unlabelled A/B.
- Platform C: instituted style-of blocks for top-1,000 celebrity names unless licence tokens present; takedowns dropped 43%.
- Agency D: created a “human-in-the-loop” authorship layer (storyboards, selection, edits) to reinforce protectable elements in campaigns.
16) The Ethical Lens (Not Optional)
Even perfect compliance can fall short if audiences feel deceived or workers feel replaced without dignity or fair compensation. The winning organisations pair legal hygiene with:
- Value statements on augmentation over substitution.
- Creator funds or royalty-sharing models.
- Robust disclosures (on-screen, in credits, and in press materials).
- Cultural impact reviews alongside legal sign-off.
17) How TRW Law Firm Can Help
Our cross-border team (Dhaka · London · Dubai · USA) supports studios, platforms, brands, and rights-holders with:
- Frontier AI compliance packs (SB-53 / EU AI Act) and System Card authoring.
- Talent, vendor, and platform contracting (NIL/voice licences, derivative uses, residuals, indemnities).
- TDM-compliant training programmes and dataset remediation (including purge workflows).
- DPIAs/TRA for biometrics, voice, and motion pipelines; transfer impact assessments.
- Union compliance and production-side playbooks.
- Content provenance and label UX audits for ads, film, and social.
Explore our broader technology and IP insights at tahmidurrahman.com.
18) Conclusion
The arrival of synthetic talent is not a thought experiment—it is a production reality. California’s SB-53 crystallises frontier-model obligations; the EU AI Act cements transparency and risk management; U.S. publicity and neighbouring-rights and EU image/privacy law remain formidable; India is accelerating its enforcement stance. The path forward is practical: licence what you need, label what you make, log what you do, and respect the humans whose creativity built your pipelines.
Handled well, synthetic performers will expand storytelling and commerce without erasing authentic human craft. Mishandled, they will invite litigation, consumer backlash, and regulatory penalties. Your governance choices in 2025 will decide which future you inhabit.
19) Structured Reference Table (Quick-Glance)
| Topic | California (SB-53 / State Law) | U.S. Federal | EU / France | India | Practical Controls |
|---|---|---|---|---|---|
| Likeness / Persona | Strong right of publicity; post-mortem rights | Varies by state (federal unfair competition may apply) | Right to image (Civil Code Art. 9); unfair competition/parasitism | Personality rights via privacy/publicity jurisprudence | NIL licences, similarity blocks, consent logs |
| Copyright / Outputs | State remedies for misappropriation; outputs governed by contracts | Human authorship required; training disputes (fair use vs infringement) | TDM with opt-outs; copyright & neighbouring rights | Copyright Act; fair dealing limited; output substitution harms | DBOM, TDM compliance, purge workflows |
| Performers’ Rights | State publicity + contracts | Neighbouring rights; union CBAs | Performers’ neighbouring rights (IPC) | Performers’ rights recognised; contract heavy | Residuals tables, role-based consent, reuse approvals |
| Transparency / Deepfake | Label synthetic content; provenance encouraged | FTC truth-in-advertising | EU AI Act: deepfake disclosure & provenance | ASCI/consumer protection; disclosure norms rising | On-screen labels, provenance (C2PA), policy enforcement |
| Privacy / Biometrics | CCPA/CPRA (if applicable) | Sectoral (HIPAA, COPPA), state biometrics laws | GDPR (SCCs/TIAs; DPIA for biometrics) | DPDP Act 2023, sectoral rules | DPIAs, minimisation, transfer safeguards |
| Frontier Model Safety | SB-53 evals, risk management, incident reporting | NIST AI RMF best practices | GPAI/High-Risk rules under EU AI Act | Guidance evolving; CERT-In, MeitY | Evals, red-teaming, kill-switches, audit trails |
| Labour / Union | SAG-AFTRA synthetic performer rules | NLRB/CBAs landscape | National labour & collective agreements | Labour codes; unionisation varies | Notice calendars, consent windows, fee matrices |
| Enforcement & Penalties | State AGs; civil actions | Federal agencies; courts | DPAs, national courts, Commission | Courts, regulators | Comprehensive evidence stack, counsel hotline |
Contact TRW Law Firm
Phone: +8801708000660 · +8801847220062 · +8801708080817
Email: [email protected] · [email protected] · [email protected]
Global Offices:
- Dhaka: House 410, Road 29, Mohakhali DOHS
- Dubai: Rolex Building, L-12 Sheikh Zayed Road
- London: 330 High Holborn, London WC1V 7QH, United Kingdom
For tailored guidance or a rapid compliance assessment of your synthetic-talent pipeline, visit tahmidurrahman.com.
