YouTube’s AI slop problem might be too big to stop

A computer shines its light on a person on their phone, casting the shadow of a monster onto the wall. Illustration by Reagan Hicks / Shutterstock / The Current

Since May of this year, the verified YouTube channel Abandoned Films — with 360,000 subscribers and 64 million total views — has been uploading what look like Dos Equis commercials in the style of the beer brand’s famous “Most Interesting Man in the World” campaign.

Some are seemingly harmless videos like “The Most American Man in the World,” but others are disparaging, like “The Dumbest Woman in the World” and “The Most Autistic Billionaire in the World,” which features what appears to be Elon Musk.

At the time of this writing, there were at least 30 similar videos on the channel with tens of thousands of views each. Many of them included “Dos Equis Ad” in the title.

The problem? They don’t seem to be affiliated with Dos Equis, and they were all created with AI. What’s more, real ads were playing before some of the videos or appearing alongside them.

Heineken, the Dos Equis parent company, did not respond to requests for comment.

The videos reflect a much larger phenomenon on YouTube: AI-generated content is surging on the Google-owned platform, blurring the lines between user-generated content and premium, professional content. Experts and insiders say it’s creating potential brand-safety concerns for advertisers; raising questions about YouTube’s monetization policies (and how it enforces them); and disrupting the creator economy in the process.

“I’m hearing advertisers want to identify how much ‘waste’ they have running on YouTube and avoid it,” a brand-safety insider says. “It is surprising to some that they are showing up on this type of content.”

AI could hurt consumer trust in advertisers

An experienced media buyer tells me that in a recent campaign, about 25% of the YouTube videos where ads appeared were AI-generated or enhanced. Is it fair for advertisers to question this? After all, if it’s not hindering results, is it an issue?

Data suggests that AI slop could have an adverse effect on consumer sentiment. A recent Raptive survey of 3,000 U.S. adults found that people were less likely to purchase products advertised on web pages they believed to be AI-generated (even if they weren’t). Respondents found the content “less premium,” and trust dropped by 50% when they perceived content as AI-generated.

The Raptive survey found that Gen Z is especially wary of AI-generated content: 52% of respondents in the 18–24 age group said they trust ads less if they appear on websites with AI-generated content.

Anna Blender, SVP of data strategy and insights at Raptive, believes the numbers would be similar if the research had focused on YouTube videos specifically.

“Some advertisers say that fixing this [AI slop problem] could help YouTube, as it could get rid of lower-quality inventory that advertisers currently buy and help them focus on higher-quality inventory that drives more positive results,” the brand-safety insider says.

Google did not return a request for comment for this story. But Google CEO Sundar Pichai has acknowledged the AI issue, noting that YouTube has taken steps to demonetize some AI content.

“Any time there are tech inflection points, it’s always a cat and mouse game,” he said during an interview at the Bloomberg Tech Summit in June. “New technology creates new opportunities, and more people can create content. But you also have a rise in low-quality content.”

He added that the company is using Gemini to “help improve YouTube’s recommendations” and to surface higher-quality content.

Still, in the AI era, where any creator can use the technology to enhance their content, what qualifies as “high” or “low” quality? And what is, or isn’t, okay to be monetized? YouTube, it seems, is still figuring it out.

AI slop is growing fast

Before we get into the specific steps YouTube has taken, it’s worth asking how far the slop is spreading.

According to a recent Guardian analysis, nearly one in 10 of the fastest-growing YouTube channels is devoted entirely to AI-generated content. One of them, Super Cat League, has over 4 million subscribers; others, like Funnymeow (apparently people really love AI cats) and FantasyAI, each have nearly 2 million.

A previous report from Sherwood News found that four of the top 10 YouTube channels with the most subscribers in May featured only AI-generated content — including Masters of Prophecy, a channel of AI-generated music videos with a staggering 32 million subscribers.

A Deadline investigation in March showed just how lucrative and slippery this ecosystem can be. It uncovered several channels pushing “concept movie trailers” (fan-made trailers built with AI) that were raking in millions of dollars in ad revenue. YouTube turned off monetization for at least two of the channels, but only after Deadline published its findings.

And it’s not all cute cats, junk music or fake movie trailers. AI slop has real-world consequences.

A “true crime” docuseries about a fake Colorado murder, generated entirely by AI, racked up over 2 million views on YouTube, 404 Media reported earlier this year. It prompted a flurry of calls to The Denver Post asking why the paper wasn’t covering the case.

Jim Louderback, editor of the newsletter Inside the Creator Economy — which has 32,000 subscribers on LinkedIn — believes AI content could account for up to 30% of viewing on YouTube by the end of the decade.

Is the ‘crackdown’ too little, too late?

YouTube recently made minor changes to its monetization policy in an apparent effort to quell AI slop, a move some headlines dubbed a “crackdown.” But experts I spoke to suggest it was the bare minimum.

First, though, let’s look at how a YouTube channel generates ad revenue. To qualify for the YouTube Partner Program (YPP) and start making money, a channel needs 1,000 subscribers and 4,000 watch hours over the past year. But since 2021, YouTube has allowed ads to run on all videos, even those from channels outside the YPP. That means a brand can still find itself advertising alongside low-quality or AI-generated content, whether the channel is getting paid or not.
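To make the mechanics concrete, here’s a minimal sketch of those two rules in Python. The function names and structure are hypothetical, for illustration only; this is not YouTube’s actual system or API.

```python
# Illustrative sketch of the monetization rules described above.
# The functions and names here are hypothetical, not YouTube's API.

def is_ypp_eligible(subscribers: int, watch_hours_past_year: float) -> bool:
    """YPP thresholds as stated above: 1,000 subscribers and
    4,000 watch hours over the past year."""
    return subscribers >= 1_000 and watch_hours_past_year >= 4_000

def ads_can_run(in_ypp: bool) -> bool:
    """Since 2021, ads may run on any video, in the YPP or not.
    Eligibility decides who gets paid, not where ads appear."""
    return True

# A small channel outside the YPP earns nothing from its videos...
print(is_ypp_eligible(subscribers=800, watch_hours_past_year=5_200))  # False
# ...but a brand's ad can still show up next to them.
print(ads_can_run(in_ypp=False))  # True
```

The gap between those two checks is exactly the exposure advertisers are flagging: payment is gated, but placement isn’t.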

Then, on July 15, YouTube updated its “repetitious content” policy, now rebranded as the “inauthentic content” policy, clarifying that it “includes content that is repetitive or mass-produced.”

The move rattled some creators who do use AI in their content, prompting Team YouTube’s help account on X to issue a public clarification, stating “channels using AI to *improve* their content can still monetize as long as they follow our policies.”

Many industry insiders aren’t convinced this will make a meaningful difference, given the deluge of AI content and YouTube’s acquiescence to the whims of the creator economy.

“Google is dealing with the edge of the problem to say ‘hey, look, we did something,’ but it’s much deeper than that,” says Paul Bannister, chief strategy officer at Raptive.

Louderback, in a recent edition of his newsletter, echoed that concern. “Will YouTube consider fully AI-generated content YPP eligible? This new policy hints at a direction, marking ‘mass produced’ content as non-YPP eligible. But it’s only a baby step. Soon, mass-produced AI stories will rival today’s ‘authentic’ content. What happens then?”

That gray area opens the door for more AI slop to be monetized, and by doing so, “YouTube is incentivizing more of it,” the brand-safety insider says.

Creators might be getting ‘boxed in’

To be fair, YouTube has a tricky balance to strike: It doesn’t want to throttle engagement on the platform, and AI content is becoming harder to avoid. As that content gets more accessible (and maybe even more tolerated), the platform must tread lightly.

Louderback explains that YouTube wants to create a highly personalized experience for each viewer, and that it’s incentivized to get a handle on the AI surge only insofar as the slop hurts the user experience.

“If the viewer is enjoying the content, the algorithm will probably recommend more content like that,” he says.

That might explain why I’m still seeing AI-generated movie trailers pop up in searches (though I can’t say I’m “enjoying” that).

But the AI sludge also risks crowding out creators who have built careers making content for the platform with an original, human touch.

Cory Williams is one of those creators. He’s been making content for YouTube for two decades and has built established IP, including Silly Crocodile, on the channel Just For Kids. It has 896,000 subscribers and over 907 million total views to date.

Over the last year, though, he’s seen his characters hijacked by AI-generated copycat videos. And he’s not alone: AI-generated content targeting kids has reportedly become rampant.

“I’ve seen a massive impact on income,” Williams reveals. “I don’t know exactly what is causing it. My theory is that I am being boxed in to the AI stuff.”

Fighting back is exhausting. Reporting all of the AI fakes would be a full-time job, he says, and he’s unsure what YouTube would even do about it. Demonetizing offenders is one thing; it’s another issue entirely when there’s so much repetitious and inauthentic content on the platform that it’s harming creators who are being monetized for their “authentic” content.

“I think Google is afraid of what’s coming,” he says, “and I really don’t think they know what to do.”

What might be next?

Many creators have already started diversifying their income beyond social video content — tapping into premium channels like connected TV (CTV) and audio. The AI dynamic could accelerate that, experts say. Streamers like Netflix, Prime Video and Tubi are actively courting creators.

There may even be a silver lining for creators amid the AI sludge. According to the Raptive survey, so-called AI “power users,” classified as those most familiar with the tools, were more likely to say human interaction boosts trust in content. They also said they craved human-centric content.

“We might see some AI burnout among certain demographics,” Raptive’s Blender says. “Creators could have an opportunity to evolve their brand and be their authentic self.”

Still, industry watchers who spoke with The Current warn that major streaming products built entirely on user-prompted, AI-generated content will emerge. We’re already seeing early signs: Adventr, an AI-focused software company, recently launched an interactive streamer that allows for real-time user input.

This kind of tech could sharpen the divide between consumers who actively prefer AI content and those who want nothing to do with it. It’s another potential sea change that brands, creators and everyone in between will have to navigate.

“AI is doing to content creation what social video did to distribution,” Louderback says. “With AI tools, you can not only distribute to anyone, but you can also create anything you want. And it’s not like that’s going to hit a wall and stop any time soon.”