
News from the open internet

Opinion

Ad tech’s alien (intelligence) invasion: Dazzle first, question later

A UFO in the shape of a megaphone beaming down, abducting a cursor with a cow pattern.
Christian Ray Blaza / Getty / Shutterstock / The Current

Ad tech has been invaded by aliens. Not the green, extraterrestrial kind, but an intelligence layer the industry didn’t build itself and instead imported from outside.

“It’s unprecedented. … We’re building an advertising system on a completely third-party framework. This is not owned or controlled by the digital advertising ecosystem. And nobody is talking about this,” Anthony Katsur, CEO of IAB Tech Lab, the standards body for the digital advertising industry, told The Drum.

So let’s talk about this.

All of ad tech’s pipes, from programmatic and real-time bidding to header bidding and beyond, were built within the industry. The rules, however imperfect, were written and owned by the same ecosystem that used them.

Ad tech’s new brain, however, is imported. For the first time, the industry has inserted an external intelligence layer (Claude, OpenAI’s models, Gemini and others) right into its core: one it didn’t design, doesn’t control, can’t independently audit and hasn’t set standards for.

Yet the industry is happily plugging in these large-scale decision-making systems from companies that don’t operate media, don’t run campaigns and don’t sit inside the commercial realities of the ecosystem. And the industry isn’t just automating media buying; it’s importing these systems to run it, without even agreeing on how to judge whether they’re right.

If that doesn’t feel like an alien intelligence invasion, what does? Not because AI is ET, but because it doesn’t share the industry’s context, incentives or judgment. It produces answers that look right, without any guarantee that they are.

That raises basic questions: What kind of intelligence is this, and under what conditions would you trust it?

You would (hopefully) not let an AI system make medical decisions without oversight. You would (probably) not hand over an investment portfolio to a system whose logic you can’t inspect. In those environments, understanding how decisions are made and what happens when they fail is a requirement.

Now, would you let it route your entire media budget across a supply chain it controls, at machine speed, using logic someone else wrote, with fees you can’t see and accountability that belongs to nobody? In other words: where correctness matters, we don’t trust this kind of intelligence. In every other domain where outcomes matter, like engineering, finance and medicine, “it usually works” doesn’t qualify as a standard; it’s a liability.

Dazzle first, question later

At the same time, most of the industry’s conversation is focused elsewhere. “Agentic” has become the 2026 default framing. Platforms are launching AI interfaces that translate prompts into campaigns, plans into execution and strategy into automated decisions. The efficiency gains are (often) real, but the underlying assumptions are rarely examined in detail.

To be fair, there is no shortage of industry discussion about AI and agentic AI. The question is whether the industry is talking about the right things.

In classic move-fast-and-break-things style, the industry is moving noisily. Google made Gemini the operating layer of DV360. PubMatic launched AgenticOS. Yahoo DSP went agentic at CES. Amazon DSP now comes with an AI agent companion. Every announcement gets amplified. Capabilities get described in press releases and repeated without challenge. A lot of it reads like fan fiction, and the structural questions don’t get asked (enough).

Whose defaults did the agent ship with? What standard of correctness are these systems held to? When an agent decides where to route your budget, who wrote the logic it’s following? What happens to fees when the buy happens in milliseconds across infrastructure you don’t control? 

These questions are not entirely new, but they become more difficult to answer when decision-making is abstracted away from human operators and embedded in systems that are not directly observable.

A media buyer brings judgment to every decision. They know the client’s risk appetite, which publishers deliver and which game their numbers, and that the CMO won’t touch anything that smells like clickbait even when the data says it works. That judgment is visible, arguable and challengeable. Ask why, and you get an answer.

Now replace that buyer with an agent. The agent is faster, but doesn’t have years of context. It has parameters, rules and logic that someone, somewhere, wrote.

But who exactly wrote those parameters? Was it the agency? The platform? The DSP’s default settings that nobody reviewed because the campaign launched before anyone thought to audit the logic? Or increasingly the third-party, off-the-shelf AI tool that the industry has plugged in on top? 

We’ve moved from people who made decisions, visibly, in rooms, to whoever configured the defaults. 

One more thing

Here’s another point for ad tech vendors: AI companies aren’t neutral infrastructure providers. Many of them have clear incentives to participate directly in the advertising market.

OpenAI is already at it. Gemini is sending contradictory signals. Claude says no ads, but who knows. If experience has taught us anything, it’s that all tech companies eventually become advertising companies.

By routing planning, optimization and decision-making through external AI systems, the industry is not just using this infrastructure but training it, effectively teaching future competitors how advertising works. Every prompt, every campaign setup, every optimization decision deepens their understanding of how advertising operates in practice. These systems observe usage patterns, workflows and outcomes, and the line between “just infrastructure” and “learning the operating system of advertising” is thin and getting thinner.

Looks like the aliens didn’t invade ad tech after all. We invited them in, handed them the controls and went back to arguing about whether they’d take our jobs.