Critiqs

Google Uses YouTube Videos to Train AI Models

  • Google uses YouTube videos to train AI models like Gemini and Veo 3 under existing creator agreements.
  • Many creators worry about their videos fueling AI without consent, raising fairness and copyright concerns.
  • YouTube says protections are in place; some creators see opportunity while others fear being replaced.

Some of the most sophisticated artificial intelligence models being developed by Google are learning from a familiar source: the mountain of videos on YouTube.

Google has confirmed it pulls from the vast archive of YouTube content to teach its AI systems, such as the Gemini models and the Veo 3 video generator unveiled at Google I/O. However, the company insists that only a fraction of the platform’s 20 billion videos is used, pointing to creator agreements and internal safeguards.

Many creators and copyright experts say this feels like unexplored territory. Some people in the industry were surprised to learn that their hard work could be teaching a technology that might one day edge them out of the spotlight, all without their direct knowledge or say.

A YouTube spokesperson made it clear where the company stands: “We’ve always used YouTube content to make our products better, and this hasn’t changed with the advent of AI.” The spokesperson added that Google has built new protections to give creators some control over their likeness in the AI era, pledging to continue strengthening these guardrails.

Rising Anxiety Among Creators

It remains unclear just how many or which specific videos get fed into these cutting-edge models. Still, even if only a tiny slice of the archive is processed, that means billions of minutes of content might already be shaping how Google’s AI sees and hears the world.

Some content creators are worried. Luke Arrigoni, CEO of Loti, a company focused on digital identity protection, said, “It’s plausible that they’re taking data from a lot of creators that have spent a lot of time and energy and their own thought to put into these videos. It’s helping the Veo 3 model make a synthetic version, a poor facsimile, of these creators. That’s not necessarily fair to them.”

With every new tool comes concern about fake versions of creators surfacing across platforms at an alarming pace. Dan Neely, head of Vermillio, a company that detects content overlap, pointed out that YouTube’s own Veo 3 has generated videos nearly identical to those produced by humans, as measured by Vermillio’s Trace ID tool.

YouTube’s terms of service spell out that by uploading content, users are granting the platform a worldwide license to use that content. This broad agreement, according to some observers, leaves plenty of space for companies like Google to teach their powerful next-generation AIs without requesting additional consent from uploaders.

Meanwhile, some creators look at these developments with a mix of resignation and curiosity. “I try to treat it as friendly competition more so than these are adversaries,” said Sam Beres, who draws an audience of 10 million subscribers on the platform. “I’m trying to do things positively because it is the inevitable — but it’s kind of an exciting inevitable.”

For now, the debate continues as creators and tech giants face off over where credit, consent, and control begin and end, with AI breakthroughs continuing to take center stage at Google events.
