OpenAI is rapidly adjusting its new video-generation platform, Sora, in response to both legal pressure and user concerns over copyright and likeness misuse. Just days after its invite-only launch, the company is rolling out new features designed to give users more control over how their images and voices are used in AI-created videos. This move comes amid growing scrutiny from the entertainment industry and legal experts, who question whether OpenAI’s initial approach to copyright infringement was sustainable.
The Cameo Feature and Initial Controversy
Sora’s standout feature, “cameo,” allows users to upload videos of themselves for inclusion in AI-generated scenes. This sparked immediate interest, with early adopters creating realistic deepfakes, including one of OpenAI CEO Sam Altman making false claims about rival AI models. While entertaining, the feature raised critical questions about consent, copyright, and the potential for misinformation. OpenAI initially required copyright holders (like film studios) to opt out of having their intellectual property used for AI training – a stance legal experts quickly dismissed as impractical.
“Copyright attaches to works the moment they’re created,” explains Robert Rosenberg, an intellectual property lawyer at Moses & Singer LLP. “Asking creators to proactively opt out was never a viable approach.” OpenAI quickly reversed this position, recognizing the need to align with established copyright law.
New Restrictions and Watermarking
The company is now introducing more granular controls. Users can specify restricted keywords or scenarios in which their likeness cannot be used, such as preventing AI-generated political commentary featuring their face and voice. Additionally, OpenAI is making the watermark on Sora-created videos more visible, aiming to clearly identify AI-generated content.
These changes are a step toward mitigating legal risks. The core issue is balancing the platform’s open nature with the rights of content creators. Existing laws, like Section 230 of the Communications Decency Act, shield social media platforms from liability for user-generated content. However, entertainment giants such as Disney and Warner Bros. have already begun suing AI firms for allowing unauthorized reproduction of copyrighted characters.
The Broader Legal Landscape
OpenAI is not alone in facing copyright challenges. The New York Times and other publishers have sued the company, alleging illegal use of proprietary content in its AI training data. Ziff Davis, CNET’s parent company, also filed a lawsuit against OpenAI for similar reasons. These legal battles highlight the fundamental tension between AI innovation and intellectual property rights.
The question now is whether OpenAI’s new measures will be enough to satisfy both individual creators and larger entertainment companies. According to Rosenberg, “The platforms are taking more responsibility, but whether this implementation will meet expectations remains to be seen.”
These adjustments are essential for the future of AI-generated content. The ongoing debate over copyright in AI is not merely a legal matter; it will define the boundaries of creative freedom and innovation in the digital age.