"OpenAI’s Sora 2 video app went viral but quickly landed in controversy as Hollywood, families, and experts push back against copyright misuse and deepfakes."
OpenAI’s Sora 2 video generation app is dominating tech headlines, but not just for its innovation. Within weeks of launch, the AI video creator went viral — and then spiraled into major legal and ethical controversy. Hollywood studios, digital safety experts, and even families of deceased celebrities have all raised serious concerns about its misuse and policy gaps.
What Made Sora 2 Go Viral?
Launched as a next-generation AI video creation tool, Sora 2 lets users create ultra-realistic short clips from text prompts. The app saw explosive growth, topping Apple’s App Store in its first week with over one million downloads — despite being limited to invite-only access in the US and Canada.
How Do You Get Access?
Sora 2 uses an invite-based system. Access codes are distributed to ChatGPT Pro subscribers and through occasional giveaways on OpenAI’s official channels.
Referral codes are single-use and expire quickly, so users rely on official or community distribution channels.
Hollywood’s Legal Pushback
Not long after launch, users began generating clips using copyrighted characters like James Bond and Mario, and deepfakes of real celebrities. In response, the Motion Picture Association (MPA) accused OpenAI of enabling large-scale copyright violations and demanded immediate reform.
Major studios and talent agencies, including Disney, Warner Bros, WME, and Creative Artists Agency, moved to opt their clients out of the platform, demanding stricter protection of likeness rights and fair compensation.
| Issue | Stakeholders | Response |
|---|---|---|
| Copyrighted characters | MPA, Disney, Warner Bros | Demanded removal and proactive moderation |
| Celebrity likeness misuse | Talent agencies, families | Requested opt-in and stronger filtering |
| App moderation gaps | Users, safety experts | Called for policy overhaul |
Families React to Deepfake Misuse
Families of deceased icons like Robin Williams and George Carlin expressed distress over AI-generated deepfake videos featuring their loved ones. Zelda Williams called it “emotionally painful,” urging users to stop sharing such clips. OpenAI stated it respects free speech but will block depictions of recently deceased individuals upon verified requests — though critics say this policy remains vague.
OpenAI’s Policy Reversal and Moderation Woes
Initially, OpenAI used an opt-out model — requiring rights holders to manually request exclusion. Following public outcry, CEO Sam Altman confirmed a shift to an opt-in system, granting creators and studios more control. Yet, users soon found ways to bypass filters by slightly altering character names, reigniting debates about AI safety and moderation quality.
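To see why slightly altered names slip past simple filters, here is a minimal, purely illustrative sketch in Python. It does not reflect OpenAI’s actual moderation system; the blocklist, the `normalize` helper, the `is_blocked` function, and the 0.85 threshold are all hypothetical. It only shows how an exact-match check misses a prompt like “Jame5 B0nd” while a fuzzy match over normalized text catches it.

```python
import unicodedata
from difflib import SequenceMatcher

# Hypothetical blocklist of protected character names (illustrative only).
BLOCKED_NAMES = ["james bond", "mario"]

def normalize(text: str) -> str:
    """Lowercase, strip accents, and keep only letters, digits, and spaces."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace())

def is_blocked(prompt: str, threshold: float = 0.85) -> bool:
    """Flag a prompt if any word window closely resembles a blocked name.

    An exact substring check would miss 'Jame5 Bond' or 'Máriö';
    fuzzy matching on normalized text catches many simple alterations.
    """
    words = normalize(prompt).split()
    for name in BLOCKED_NAMES:
        n = len(name.split())
        for i in range(len(words) - n + 1):
            window = " ".join(words[i:i + n])
            if SequenceMatcher(None, window, name).ratio() >= threshold:
                return True
    return False

print(is_blocked("a short clip of james bond driving"))  # True
print(is_blocked("a short clip of jame5 bond driving"))  # True (close fuzzy match)
print(is_blocked("a cat playing piano"))                 # False
```

Even a check like this is only a first layer; determined users can still paraphrase around name-based rules, which is why critics argue moderation quality, not just policy, is the core problem.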
“Sora 2 shows how fast innovation can outpace ethics,” a digital policy expert told The Guardian.
Safety, Misinformation, and The Road Ahead
Digital safety experts warn that tools like Sora 2 could amplify misinformation, bullying, and fraud through realistic AI-generated videos. Regulators and Hollywood are pushing for clear laws and frameworks to address AI misuse while preserving innovation.
FAQs
- Q1: How can I get a Sora 2 invite?
  A: You can request access through OpenAI’s official site or community code threads on Reddit.
- Q2: Why is Hollywood against Sora 2?
  A: Users have been generating videos with copyrighted characters and celebrity likenesses, leading to potential legal violations.
- Q3: Is Sora 2 safe to use?
  A: The app includes moderation filters, but experts warn they’re still easy to bypass. Use responsibly and follow official guidelines.

