Sora 2 arrives on Android in North America (and soon in Europe), sparking doubts about ethics, copyright and the legal boundaries of AI. And Italy takes action.
The second version of Sora, launched by OpenAI just a few days ago on Android devices in North America, is shattering every record: over one million downloads in the first five days. The app, until recently available by invitation only on iOS in the United States and Canada, lets users create hyper-realistic ten-second videos from simple text descriptions.
In essence, you type a phrase like “a dog runs in Central Park” and within moments the software generates a clip that looks like it was taken from a film. Success was immediate, helped by how easily the videos can be shared on social media, but Sora’s rise has already sparked heated discussion about the recreation of real faces, places and people, even deceased ones, fueling the debate on ethics, copyright and the limits of generative AI.
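OpenAI has also announced developer access to Sora 2 through its API. The sketch below shows the same prompt-to-clip flow in Python; the model identifier, method names (`videos.create`, `videos.retrieve`, `videos.download_content`) and parameters are assumptions based on OpenAI’s published SDK and may not match exactly what the consumer app uses.

```python
# A minimal sketch of prompt-to-video generation via OpenAI's Python SDK.
# Method names and parameters are assumptions and may differ in practice.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Start an asynchronous generation job from a plain-text description.
job = client.videos.create(
    model="sora-2",                       # assumed model identifier
    prompt="a dog runs in Central Park",  # the text description
)

# Generation takes a while, so poll the job until it finishes.
while job.status in ("queued", "in_progress"):
    time.sleep(5)
    job = client.videos.retrieve(job.id)

if job.status == "completed":
    # Download the finished clip and save it to disk.
    content = client.videos.download_content(job.id)
    content.write_to_file("dog_in_central_park.mp4")
else:
    print(f"Generation ended with status: {job.status}")
```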
Impossible videos. Sora 2 marks a clear leap in quality over the first model, which was already astonishing in itself. Beyond the improved visual output, the AI interprets natural-language prompts more precisely and reproduces camera movements, lighting and textures with impressive fidelity. The results are often indistinguishable from real footage and flooded social networks within hours.
Some videos show reconstructions of scenes with famous people, including deceased singers and actors, so much so that Robin Williams’ daughter has publicly asked people to stop sharing “virtual” versions of her father. In other cases, users have pushed creativity beyond legal boundaries: a viral deepfake showed OpenAI’s CEO, Sam Altman, conversing with Pokémon characters, poking fun at the risks of copyright infringement. Surprised by the boom of recent days, the company itself has admitted that it is still searching for a balance between creative freedom and respect for the rights of the people depicted.
Thorny questions. The Sora 2 app has raised another crucial question: who is responsible for the content it generates? According to OpenAI, the platform is protected by a moderation system and filters that limit abuse, but the deepfake cases suggest that, in all likelihood, these safeguards are not enough. Video models, like text models, learn from enormous amounts of data, and distinguishing legitimate use from illicit exploitation remains complex.
Altman said the company will introduce “more granular control”: the ability for rights holders to decide in detail how, where and by whom their characters or protected content may be used.
At the same time, the CEO hinted that a revenue-sharing mechanism will be introduced in the near future, redistributing part of the earnings generated to those who choose to authorize the use of their characters or works. Meanwhile, the viral success of Sora 2 is also putting OpenAI’s servers to the test: overwhelmed by user requests, the service shows evident slowdowns in producing results.
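OpenAI has not described how such controls would work. The following is a purely hypothetical sketch, with every name and field invented for illustration, of how a per-character rights policy combining usage permissions with a revenue share might be represented.

```python
# Purely hypothetical illustration of "granular control" for rights
# holders. None of these fields come from OpenAI's actual product;
# they exist only to make the idea concrete.
from dataclasses import dataclass, field

@dataclass
class RightsPolicy:
    rights_holder: str
    character: str
    allowed_uses: set[str] = field(default_factory=set)     # e.g. {"parody"}
    blocked_regions: set[str] = field(default_factory=set)  # e.g. {"EU"}
    revenue_share: float = 0.0  # fraction of earnings returned to the holder

    def permits(self, use: str, region: str) -> bool:
        """Check whether a requested use in a given region is allowed."""
        return use in self.allowed_uses and region not in self.blocked_regions

# Example: a holder authorizes parody only, outside the EU,
# in exchange for a 20% revenue share.
policy = RightsPolicy(
    rights_holder="Example Studio",
    character="Example Mascot",
    allowed_uses={"parody"},
    blocked_regions={"EU"},
    revenue_share=0.20,
)
assert policy.permits("parody", "US")
assert not policy.permits("commercial_ad", "US")
```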
Italian record. While Sora 2 lands on Android, with a debut outside North America expected soon (no date has yet been announced), Italy is already regulating the phenomenon. A new law on artificial intelligence, in force since 10 October, introduces the crime of deepfake: anyone who disseminates, without consent, falsified videos, audio or images that harm a person risks one to five years in prison.
The rule, the first of its kind in Europe, fills a legal void at a time when tools such as Sora and Sora 2 put visual manipulation within everyone’s reach. The provision aims to protect not only individual reputation but also human creativity, threatened by increasingly powerful algorithms: within a few years, an artificially generated video may no longer be distinguishable from a real one, at least to the naked eye.
