Why AI Is the Antithesis of Inclusive Marketing
Image Description: Three diverse hands hold one magnifying glass together against a colourful geometric background.
AI is not neutral - it mirrors the biases we’re trying to dismantle through inclusive marketing. The responsibility is on us as marketers to use these tools with intention, so we don’t replicate this unconscious bias.
The Tension
AI promises speed, scale, and personalization. Inclusive marketing demands empathy, context, and equity. At first glance, they look like opposites - and sometimes they are. The truth is, AI often amplifies bias instead of dismantling it. That’s why human oversight isn’t optional; it’s the work.
In my practice, I use a discipline called Strategic Inclusive Marketing to make sure oversight isn’t just “good intentions.” It’s built into process, systems, and checkpoints. That lens helps us understand why AI can be the antithesis of inclusion - and how to use it responsibly, without erasing equity-deserving groups.
Why AI Is the Antithesis of Inclusive Marketing
AI isn’t neutral. It learns from the data we feed it, which means it inherits bias baked into history. UNESCO research found that popular language models associate women with domestic roles four times more often than men, while men were linked to words like “executive” and “career.” And in advertising, algorithms have shown high-paying job ads to men more often than women.
Travel brand Black & Abroad put this bias on full display. When they asked a generative AI to place Black travellers in vacation images, the AI lightened their skin, straightened their hair, or dropped them into scenes of poverty. Why? Because its training data had almost no diverse travel images. The result was erasure - the opposite of inclusive representation.
Personally, I’ve felt the impact of this erasure. When I travel, I’m often ‘mistaken’ for staff, a tour guide, or a local. Rarely am I recognized as a tourist. I believe this is a direct result of the erasure of different abilities, cultures, gender identities, and skin colours in travel, airline, hospitality, and tourism campaigns.
This is why inclusive marketing and AI clash: one is about broadening representation; the other repeats what it has already seen.
When AI Goes Wrong
The failures pile up quickly when oversight is missing. Microsoft’s Tay chatbot was hijacked by trolls and began spewing racist tweets within hours of launch. Amazon scrapped an AI recruiting tool after discovering it downgraded résumés containing the word “women’s.” And just this year, Vogue drew backlash for running a Guess ad featuring an AI-generated, blonde, blue-eyed model - a move critics said sidelined real, diverse talent.
What these cases share is a lack of structured guardrails. Without humans asking, “Who’s being erased here? Who’s harmed if this goes live?” AI turns into a liability.
When Humans Get It Right
Some of the most successful inclusive campaigns have deliberately minimized or rejected AI. Dove’s “The Code” campaign, for example, highlighted how generative AI produced a narrow vision of beauty - thin, light-skinned, blond women - when asked for “the most beautiful woman.” Dove responded by publishing a Real Beauty Prompt Playbook and pledging never to use AI to generate women’s images. The campaign resonated because it stayed human and authentic.
Fenty Beauty’s launch with 40 foundation shades didn’t need AI. It needed awareness that entire demographics were ignored. That awareness turned into a cultural movement and business success - proof that listening to underserved audiences builds loyalty.
Nike’s “Dream Crazy” with Colin Kaepernick is another example. Critics predicted boycotts; instead, Nike’s online sales rose 31% in the days that followed. Inclusive marketing rooted in human conviction connected more powerfully than any algorithm ever could.
Using AI for Good (With Oversight)
So where does AI fit? It can support inclusive marketing - but only when humans set the boundaries. Here’s how:
Audit language and imagery: Use AI tools to flag potentially biased words or representation gaps, but always review with human judgment (a simple sketch of this kind of check follows this list).
Co-create with lived experience: Involve equity-deserving groups in reviewing AI drafts. Representation is not optional; it’s oversight.
Set bias briefs: Document what inclusion looks like for your brand - inclusive language, cultural nuance, accessibility standards - and apply it consistently when using AI tools.
Expand accessibility: AI can speed up captions, translations, and alt-text creation. Use it to break barriers, not to cut corners. Be deliberate in your prompts.
Measure representation: AI analytics can help you quantify who shows up in your campaigns. Humans then interpret the results and adjust.
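As one illustration of the first item, here is a minimal sketch in Python of a language audit that flags potentially exclusionary terms in draft copy and routes every finding to a human reviewer. The term list and suggestions are illustrative placeholders, not a definitive lexicon; in practice they would come from your own bias brief and be shaped by people with lived experience.

```python
import re

# Illustrative placeholder terms only. A real audit would draw on the brand's
# own bias brief and be reviewed by people with lived experience.
FLAGGED_TERMS = {
    "chairman": "consider 'chair' or 'chairperson'",
    "manpower": "consider 'workforce' or 'staffing'",
    "crazy": "consider 'surprising' or 'unexpected'",
    "guys": "consider 'everyone' or 'team'",
}

def audit_copy(draft: str) -> list[dict]:
    """Flag potentially exclusionary terms in draft copy for human review.

    Returns a list of findings; it never auto-rewrites the copy, because the
    final judgment call belongs to a person, not the tool.
    """
    findings = []
    for term, suggestion in FLAGGED_TERMS.items():
        # Whole-word, case-insensitive matches so "guys" is caught but "disguise" is not.
        for match in re.finditer(rf"\b{re.escape(term)}\b", draft, re.IGNORECASE):
            findings.append({
                "term": match.group(0),
                "position": match.start(),
                "suggestion": suggestion,
            })
    return findings

if __name__ == "__main__":
    draft = "We need more manpower on this project, guys."
    for finding in audit_copy(draft):
        print(f"Flag '{finding['term']}' at {finding['position']}: {finding['suggestion']}")
```

The design choice is the point: the tool only surfaces candidates, while the rewrite decision, and the judgment about context and intent, stays with a person.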
The goal is simple: let AI do the heavy lifting, but keep people responsible for the values.
Bringing the Human Into Oversight
This is where a discipline like Strategic Inclusive Marketing matters. It makes oversight practical by embedding it into everyday processes: Standard Operating Procedures, briefings, creative reviews, and measurement. It ensures that equity-deserving groups are not erased or minimized by the speed of technology.
When marketers adopt this lens, AI becomes a tool to scale inclusion, not to undermine it. It becomes an assistant, not the author. And that’s the balance we need: high-tech capability guided by deeply human judgment.
The Bottom Line
AI doesn’t understand equity. People do (or should). Our job as marketers is to make sure inclusion isn’t an afterthought, even when AI is in the mix. With structured oversight, we can use AI to expand access, audit bias, and personalize responsibly - without erasing the audiences who deserve to be seen.
Inclusive marketing will always be human work. AI can help us go faster, but only if we keep it aligned with our values.