A new AI video creation tool from Google is drawing serious concern from tech experts and watchdogs.
Users have used the platform to whip up deeply convincing clips that could easily stoke tensions, especially when shared amid breaking news. One expert pointed out that many people may not spot the subtle flaws in these short, high-quality videos if they are already caught up in the heat of a current event.
With just a simple prompt, the tool can produce strikingly realistic scenes, from fabricated disasters to fake acts of violence. In one test, investigators generated videos of election workers shredding ballots and of crowds wreaking havoc at sensitive locations.
The level of sophistication is striking. Veo 3, the tool in question, produces footage with natural dialogue, soundtracks, and effects that are far more believable than earlier attempts in this space.
Creators online have used it for everything from goofy miniature films to fake news broadcasts, and it has already found an audience. Some observers are deeply worried that anyone, anywhere, can put these misleading visuals into circulation with almost no friction.
AI Videos and the Misinformation Threat
When a real car accident occurred in Liverpool, authorities quickly clarified the facts to prevent rumors from spreading about who was involved. Meanwhile, Veo 3 was able to create a nearly identical scene, complete with details that could easily heighten biases or stir fears among viewers.
One important detail: Google says all Veo 3 videos carry a faint visible watermark, along with an invisible tag called SynthID. Unfortunately, experts warn the visible watermark is so small it can be cropped out by anyone with basic video editing tools.
Veo 3 is available to subscribers of Google's AI Ultra plan in certain countries for $249 a month, and it does include safeguards. In testing, the software refused to make certain types of content, including prompts involving real public figures, certain disasters, and specific requests that might be interpreted as hate speech or incitement.
Google says it conducted extensive testing and implemented multiple layers of filtering before releasing the model to the public. In hands-on trials, however, provocative clips could still be produced with only minor adjustments to wording.
Some of the generated content veered into controversial territory: scenes of mishandled ballot boxes, factories with unsanitary practices, and angry mobs waving foreign flags. Clips like these can be stitched together into complex fake stories, as one researcher demonstrated, noting just how quickly a convincing hoax can spiral out of control.
Media experts say the danger isn’t just the creation of bad content, but the breakdown of trust in everything people see online. In some cases, real footage has been dismissed as artificial, complicating truth-finding during high-stakes situations.
Legal battles are erupting over AI video generation, including lawsuits from artists concerned about the use of copyrighted material in training data. Congress recently passed new laws addressing explicit deepfake content, but researchers argue this is not nearly enough.
Industry researchers insist that current technical barriers are simply not working, and they are calling for tougher rules to genuinely prevent abuse of synthetic media. As the technology continues to improve, pressure is building for a meaningful response.