Critiqs

Blurring the Lines: Crowd Scenes Go Next Level With AI

  • AI-created crowds in media are getting harder to tell apart from real ones, causing increased public confusion.
  • Fake crowd images can sway perceptions of influence at events and make genuine photos easier to doubt online.
  • Efforts to label AI images lag as creators and platforms struggle with inconsistent marking and growing misuse.

At a quick glance, footage of the crowd at a recent Will Smith concert, shared on social media, seemed electrifying.

But eagle-eyed fans noticed odd things: blurred features and strange hands that screamed digital tampering.

The technology behind these crowd scenes is becoming more sophisticated. The latest video generators now create masses of individuals, each with unique traits, who move almost believably in unison.

San Francisco-based visual artist and researcher kyt janae calls it “a world where the lines of reality are about to get really blurry” and warns that we’ll soon have to practice verifying what’s real and what’s not. For AI models, creating a single-person image is easy, but crowd simulations require managing countless independent details. That means every jacket, hairline, and gesture must look convincing to fool viewers.

The Power of Crowd Images in Public Life

Crowd images are more than just background detail — they carry meaning in society. At rock shows, rallies, and protests, a packed audience has always been a shorthand for success.

Thomas Smith, CEO of Gado Images, puts it plainly: “AI is a good way to cheat and kind of inflate the size of your crowd.” He explains that crowd size remains a powerful visual metric people use to judge influence.

A Capgemini report indicated that almost three-quarters of images circulating on social platforms in 2023 were made with artificial intelligence. This rapidly growing trend brings both creative opportunities and unexpected hazards. While anyone can now generate a scene of adoring fans, faked masses also make it easy to dismiss real photos as fabrications.

There are already ripple effects. In August, Republican nominee Donald Trump falsely claimed that Kamala Harris’s team had used AI to simulate a sea of supporters.

The problem is compounded by the small screens most people use, says Charlie Fink, a lecturer at Chapman University. “If it looks real, it is real,” he observes, highlighting the way context vanishes on a phone.

Big tech companies find themselves stuck between letting users stretch their creative potential and preventing widespread misinformation. Google DeepMind scientist Oliver Wang says they are making serious efforts to mark images with both visible and hidden watermarks, but concedes the public one is often small and hard to spot.

Platforms such as Meta, YouTube, and TikTok take varying approaches to labeling, but none has converged on a single standard. This patchwork makes it easy for some AI creations to slip by undetected or unlabeled.

Will Smith, for his part, is leaning into the chaos. In a tongue-in-cheek follow-up, he posted concert footage with the crowd replaced by dancing cats. “Crowd was poppin’ tonite!!” he joked, a wink at the increasingly blurred frontier of fame in the artificial era.
