A wave of short, lifelike videos featuring celebrities who died decades ago has taken social media by storm, raising big questions about consent and digital legacy.
OpenAI’s latest app, Sora, has racked up more than a million downloads almost overnight. Users can generate quirky clips showing stars like Marilyn Monroe or Nat King Cole in offbeat scenarios, but as the videos grow more convincing, some families are beginning to speak out.
It is now possible to watch a digital recreation of Aretha Franklin making candles or see John F. Kennedy claiming the moon landing was faked. These are just a few examples of the new frontier of deepfake videos that Sora can create in less than sixty seconds.
The technology’s entertainment value is clear, yet it has thrust copyright lawyers and the relatives of these figures into complicated conversations about posthumous dignity and digital misuse.
Zelda Williams, daughter of the late Robin Williams, drew a line this week. “If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop,” she wrote, sounding exasperated at the repeated recycling of her father’s image.
Spotting Deepfakes and the Challenge of Control
OpenAI has stated that it wants family members and representatives to have meaningful control over the digital use of public figures’ likenesses. For recently deceased figures, estates can formally request that their loved ones be removed from Sora’s output.
Digital tools are in place to help distinguish genuine videos from Sora’s AI output. Each clip carries a hidden marker, a visible watermark, and attached metadata. But Sid Srinivasan, a computer science expert at Harvard, doubts these safeguards will stop truly determined troublemakers, who may find ways to strip the markers out.
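The metadata check described above can be illustrated with a minimal sketch. This is not OpenAI’s actual schema; the field names (`provenance_claims`, `generator`) are hypothetical, and the sketch assumes the metadata has already been extracted from the video file into a dictionary:

```python
# Hypothetical provenance check: field names are illustrative only,
# not OpenAI's real metadata format.

def looks_ai_generated(metadata: dict) -> bool:
    """Return True if any provenance claim names an AI generator."""
    claims = metadata.get("provenance_claims", [])
    return any(
        claim.get("generator", "").lower().startswith("sora")
        for claim in claims
    )

# Example: a clip whose attached metadata declares its generator.
sample = {
    "provenance_claims": [
        {"generator": "Sora", "action": "created"},
    ]
}
print(looks_ai_generated(sample))  # prints True
```

As Srinivasan’s caveat suggests, a check like this is only as reliable as the metadata itself: if a re-encoder strips the attached fields, the function simply returns False, which is why detection efforts also look at the audio and visual signal directly.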
Security companies such as McAfee are turning to artificial intelligence as a shield against this very problem, training models to pick up audio fingerprints or visual cues in a way that humans simply cannot. Steve Grobman, chief technology officer at McAfee, observed that “new tools are making fake video and audio look more real all the time, and 1 in 5 people told us they or someone they know has already fallen victim to a deepfake scam.”
As these videos continue to blend reality and fiction, trust in media itself may erode. Liam Mayes, who lectures at Rice University, warned that more sophisticated fakes could trigger confusion or even undermine democracies: “We might see trust in all sorts of media establishments and institutions erode.”
OpenAI’s Sora is already at the center of fiery debates. For some, it represents creative expression and new ways of preserving history. For others, it threatens to rewrite history without permission, a prospect fueling ongoing backlash from experts and families alike as AI-generated video floods the feeds of major platforms.