
Opinion

IAB Tech Lab warns: Don’t let AI browsers ‘strip-mine’ the open internet

Q&A: Anthony Katsur, CEO of IAB Tech Lab

The rise of AI browsers represents a seismic shift in how content is discovered, consumed and monetized online. As large language models (LLMs) increasingly power the consumer journey, traditional notions of search and site traffic are being disrupted.

For brands and publishers, this shift raises urgent questions: How will their content be accessed, represented and monetized in AI-driven environments? And, perhaps more critically, who will control the value exchange?

Anthony Katsur, CEO of IAB Tech Lab, believes that without clear industry standards and enforceable frameworks, AI browsers could replicate the centralization of power that’s plagued the open web for years. But there’s still time to shape a better future.

IAB Tech Lab is creating technical guardrails like the LLM Content Ingest API to give publishers and brands more control over how their content is used by LLMs and AI platforms, addressing issues like content tokenization, crawl-based payment models and tiered access to content. Meanwhile, plans for a broader AI task force with publishers, LLM developers and edge computing firms are already underway, with the goal of bringing transparency, attribution and fair compensation to the industry.
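To make the idea of tiered access concrete, here is a minimal sketch of a publisher-declared access policy. The tier names, use categories and pricing fields are hypothetical assumptions for illustration, not the actual LLM Content Ingest API schema, which IAB Tech Lab defines separately.

```python
# Hypothetical publisher access policy for AI platforms; the field names are
# illustrative, not the actual LLM Content Ingest API schema.
ACCESS_POLICY = {
    "publisher": "example-news.com",
    "tiers": {
        "headlines": {"allowed_uses": ["search", "rag"], "price_per_query_usd": 0.0},
        "articles": {"allowed_uses": ["rag"], "price_per_query_usd": 0.002},
        "archive": {"allowed_uses": [], "price_per_query_usd": None},  # no AI access
    },
}


def check_access(tier: str, use: str) -> tuple[bool, float | None]:
    """Return (allowed, price per query in USD) for a requested tier and use."""
    policy = ACCESS_POLICY["tiers"].get(tier)
    if policy is None or use not in policy["allowed_uses"]:
        return False, None
    return True, policy["price_per_query_usd"]


print(check_access("articles", "rag"))  # (True, 0.002)
print(check_access("archive", "rag"))   # (False, None)
```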

The Current caught up with Katsur to hear how AI browsers could reshape the economics of the web, why the threat of a new kind of gatekeeping is real, and what publishers and brands must do to assert their place in this new world.

This interview has been lightly edited for length and clarity.

How will these AI browsers change how consumers search today?

AI browsers have the potential to decentralize how users access information. By leveraging large language models and context-aware interfaces, these browsers offer intuitive, conversation-based discovery that could appeal to younger users fatigued by traditional search interfaces.

But the competition won’t just be about user experience — it will hinge on who controls the underlying content. That’s where we’re focusing. Without enforceable standards for content access, these new interfaces risk centralizing control in fewer hands than ever.

Do you believe AI browsers will bring more ads, like Google is attempting to do?

Absolutely, ad models will follow. In fact, we anticipate that AI browsers will evolve into query-based monetization environments through advertising and e-commerce. Our framework prepares for this shift by proposing models such as cost per query over scraping-based economics. We’re building this infrastructure so that ad delivery in AI environments respects consent, attribution and brand safety.

And, yes, there will be ads, but we believe they must be transparent, standards-compliant and privacy-centric. AI is a powerful discovery interface, but without the proper controls, it could also perpetuate bias or misinformation — especially in advertising contexts.
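To illustrate the cost-per-query idea Katsur describes in the simplest terms, the sketch below meters LLM answers against publisher-set rates. The rate card, function names and settlement logic are assumptions for illustration, not a published IAB Tech Lab specification.

```python
from collections import defaultdict

# Hypothetical per-query rates (USD) set by each publisher; values are illustrative.
RATE_CARD = {"example-news.com": 0.002, "example-journal.org": 0.005}

usage = defaultdict(int)  # number of LLM answers that drew on each publisher's content


def record_query(publisher: str) -> None:
    """Meter one answered query that used this publisher's content."""
    usage[publisher] += 1


def settle() -> dict:
    """Compute what the AI platform owes each publisher under cost per query."""
    return {pub: count * RATE_CARD.get(pub, 0.0) for pub, count in usage.items()}


for _ in range(1000):
    record_query("example-news.com")
print(settle())  # {'example-news.com': 2.0}
```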

Could AI browsers help revive the open internet, or are we just moving the gatekeeping from one set of algorithms to another?

AI browsers could help revive the open internet, but only if they are built with clear accountability, permissioning and compensation structures, grounded in open standards that sustain the very content they rely on. Otherwise, we are not reviving the open internet — we’re strip-mining it.

However, this optimistic vision only holds if there is a robust, open and interoperable framework in place to ensure that publishers and media companies are fairly compensated for their content. Without economic incentives, the engines powering these AI systems will run on borrowed fuel. Generative AI models do not create knowledge — they remix it. If original content creation becomes financially unsustainable, the inputs that feed AI will atrophy, degrading the quality of output and, ultimately, hollowing out the web.

That’s a strong statement.

Let’s be clear: The risk of algorithmic gatekeeping is real and is rising rapidly.

So, while AI browsers could theoretically decentralize discovery, they may also centralize control over attention and monetization even further — shifting gatekeeping from human-curated search results and social feeds to opaque AI ranking systems. Without transparency and fair content licensing models, we risk replacing one closed ecosystem with another, equally extractive one.

Our API aims to flip that dynamic. Because it requires contractual agreements and allows the tokenization of content, publishers will finally have audit trails and enforcement mechanisms.
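As a rough sketch of how tokenized content and an audit trail could work together, the example below mints a signed token for each licensed use and records it in a log. The signing scheme, field names and storage are hypothetical, not the API’s actual design.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"publisher-secret-key"  # illustrative; real key management would differ
AUDIT_LOG = []  # in practice this would be durable, queryable storage


def issue_content_token(content_id: str, licensee: str, use: str) -> str:
    """Mint a signed token tying one piece of content to one licensed use."""
    claims = {"content_id": content_id, "licensee": licensee, "use": use,
              "issued_at": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    AUDIT_LOG.append({**claims, "signature": signature})  # audit-trail entry
    return signature


token = issue_content_token("article-8841", "example-llm.ai", "rag")
print(token[:16], len(AUDIT_LOG))  # truncated signature and one logged entry
```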

How can brands work to integrate themselves with AI browsers?

Brands must proactively manage how their products are represented in AI-driven environments. We’ve heard from brands that have already seen their products mischaracterized in LLM-generated responses — potentially resulting in a lost sale or a damaged reputation in an instant.

The key is tokenization and brand-suitability metadata. Brands need to partner early, understand the data flows and ensure their content is accessible under appropriate conditions.

What might be the best steps for publishers to work with LLMs?

Publishers must start with control — not optimization. They need to assert content rights through standardized metadata and licensing signals. Participation in industry initiatives, such as Tech Lab’s Content Labeling and AI Ingest API projects, will be key. This empowers publishers to maintain attribution, context and monetization pathways while enabling LLMs to access content responsibly.
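The sketch below shows what machine-readable labeling and licensing signals might look like in practice; the field names and license identifier are illustrative assumptions, not the actual Tech Lab Content Labeling schema.

```python
# Hypothetical machine-readable labels a publisher might attach to an article;
# field names are illustrative, not the Tech Lab Content Labeling schema.
ARTICLE_LABELS = {
    "url": "https://example-news.com/2025/ai-browsers",
    "attribution_required": True,
    "license": "ai-ingest/commercial",          # assumed license identifier
    "allowed_uses": ["summarization", "rag"],   # training is deliberately absent
    "expires": "2026-01-01",
}


def may_ingest(labels: dict, use: str) -> bool:
    """Check the publisher's signal before an LLM ingests the page for a given use."""
    return use in labels.get("allowed_uses", [])


print(may_ingest(ARTICLE_LABELS, "rag"))       # True
print(may_ingest(ARTICLE_LABELS, "training"))  # False
```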

Is there a risk that AI browsers become yet another “walled garden,” despite being pitched as tools for exploration?

Yes, and it’s a serious one. History shows that convenience often leads to consolidation. Zero-click search, hidden crawlers and misused bots have created an environment where content is devalued.

That’s why our strategy takes a three-pronged approach: technology, a publisher coalition and updated copyright legislation. Without alignment across all three, we’ll simply see the same centralization patterns, only now backed by AI-scale consumption.