
Opinion

IAB’s new VP of AI on the most urgent AI issues the ad industry should address

IAB’s VP of AI, Caroline Giegerich (Sarah Kim / The Current)

If AI hasn’t already dominated the advertising industry, then it soon will.

Seventy percent of brands, agencies and publishers have yet to fully integrate AI into their strategies, IAB’s Cintia Gabilan said during a NewFronts keynote address, citing IAB research. But half of those expect to be completely AI-powered by next year, she added.

“The gap between AI adopters and AI skeptics is about to become a competitive fault line,” Gabilan said onstage.

To address AI’s impact on the ad industry, IAB recently hired a VP of AI, Caroline Giegerich, to spearhead the organization’s efforts to help the industry incorporate the technology responsibly.

Giegerich, who was most recently the vice president of global marketing and innovation at Warner Music Group, discussed her role and vision at IAB in an interview with The Current.

“If we don’t establish practical, responsible frameworks now, we risk fragmented practices, brand safety missteps and missed opportunities for publishers and marketers alike,” Giegerich says.

Can you offer more details about IAB’s three-pronged AI road map? Why are they each important?

IAB’s three-pronged AI road map for 2025 focuses on, one, mapping the evolving AI ecosystem for advertising. The IAB AI Ecosystem Map will be a dynamic, regularly updated resource that helps stakeholders track rapidly emerging AI use cases across the entire campaign life cycle, from creative to media to measurement.

Two, enabling scalable creative personalization. The IAB Creative Personalization pillar will equip publishers, agencies and brands with best practices to operationalize GenAI creative personalization at scale through improved workflows. And three, ensuring content integrity and quality assurance. The IAB AI Content Integrity initiative delivers best practices and QA frameworks that protect trust, transparency and brand safety as synthetic media proliferates.

What makes now such an important or urgent time to ensure AI is being used practically and responsibly in the advertising industry?

Now is a pivotal moment because AI adoption is accelerating across every layer of the advertising supply chain, from creative production to media buying to measurement. If we don’t establish practical, responsible frameworks now, we risk fragmented practices, brand safety missteps and missed opportunities for publishers and marketers alike.

The urgency isn’t just about managing risk; it’s also about enabling innovation in a way that builds trust. IAB’s role is to help the industry move fast and wisely, with shared guidance that keeps pace with real-world adoption.

What are ways that AI is being used responsibly in the industry right now?

As AI transforms advertising capabilities, responsible implementation is key. Today, contextual AI analyzes on-screen content rather than tracking personal data, while dynamic creative adapts ad elements in real time based on environment and context. Synthetic audiences model behavior using anonymized, AI-generated data, bypassing traditional privacy concerns. And zero- and first-party data amplification enables personalization rooted in willingly shared preferences, without the need for additional collection.

Conversely, has there been anything that has given you pause? What do you think is most urgent to address?

Poor data quality and inadequate security controls risk perpetuating privacy violations, while fragmented AI implementation across platforms creates efficiency gaps and protection vulnerabilities. Widespread knowledge gaps in AI literacy can obstruct responsible adoption, and the evolving regulatory landscape demands continuous compliance vigilance.

The opaque nature of many AI systems raises questions about transparency, and without intentional ethical design, these technologies may risk replicating the same privacy invasions that characterized traditional digital advertising.

The third prong, content integrity and trust, on paper seems like it could be the most difficult. AI comes with challenges regarding brand safety. How do you foresee navigating those challenges? What kind of standards might be implemented?

Absolutely, ensuring content integrity and trust is one of the most complex but essential areas as GenAI adoption accelerates. IAB plans to meet this challenge by developing actionable frameworks around disclosure, quality assurance and synthetic content governance.

This includes clear labeling guidelines for AI-generated assets and support for technical frameworks like those from the Coalition for Content Provenance and Authenticity (C2PA) [which introduces standards for validating the source of content]. By aligning stakeholders around shared protocols across creative formats, we can foster responsible, transparent AI use in advertising.

Are there any lessons from past experiences, like at Warner Music Group, that you’re bringing to this new role?

At Warner Music Group, I had the opportunity to work at the intersection of marketing, digital innovation and AI, where I focused on how emerging technologies could amplify artist reach and enhance audience insights.

For example, for the musician Cobrah, we created AI-generated visuals that matched the creative aesthetic of her music while staying within budget.

One key lesson that stands out is the importance of integrating AI in ways that are both strategic and human-centric. It’s not just about leveraging cutting-edge technology but about ensuring that every AI-driven initiative aligns with broader narratives and resonates authentically with audiences.