AI Strategy Has a People Problem
And it's all anyone could talk about at Shoptalk 2026.
I’m at Shoptalk this week, and the vibe has shifted.
AI has penetrated the conference floor far deeper than it had a year ago: more vendors, more integrations, more of the stack. But it’s not the breathless adoption pitch anymore. It’s a reputation exercise. “Stop the Slop.” “Hallucinations are for hippies.” The industry already bought it. Now it has to convince everyone else it was a good idea.
The most honest conversations I had this week weren’t about what AI can do. They were about what the people operating it aren’t seeing.
Here are the numbers: 82% of marketers think AI benefits consumers. 42% of consumers agree. That’s a 40-point perception gap between the people building the campaigns and the people receiving them. And 51% of marketers are now running campaigns that target AI models and web crawlers rather than humans.
The people closest to the tools have the worst read on their impact.
Enter the doom loop.
Two months ago, I wrote about the entropy problem — that messy taxonomies, mashed-together tech stacks, and product catalogs built on institutional knowledge were never designed to be reasoned with by a machine. That thesis held up this week, but the second bottleneck is worse because it’s harder to see.
We humans develop judgment by sitting with things that don’t resolve neatly. By doing the work badly, often, and usually for a long time. The eight-hundred-and-fortieth subject line teaches you something the first one can’t. The “best practices” that profitably scaled one Google Ads account were toothless in another. The campaign that tested well and bombed in market teaches you that data and judgment are not the same thing.
AI removed that friction. The AI writes the subject line now. The AI drafts the brief, and the person approving the output may never have done the work that would tell them whether what they’re looking at is good, derivative, or actively damaging to the brand. HBR called this the “judgment paradox”: AI simultaneously increases the need for human judgment while eliminating the messy work through which judgment is built.
The operator loses discernment, the output gets worse, rinse and repeat. The operator can’t see what’s happening: bad judgment approving bad output generated from bad data and a bad prompt. The doom loop spirals and spirals and no one is looking up.
Anthropologie’s Anu Narayanan put it plainly from the Shoptalk stage: “Data doesn’t tell you everything. It won’t tell you what’s next.” Pinterest’s Matt Madrigal said something similar: “Taste is visual. You know it when you see it, even if you can’t describe it.” Even the platforms trying to encode taste into an algorithm admit it’s intuitive.
Judgment is not something you can shortcut, but it’s the thing we’ve been quickest to devalue.
Meanwhile, consumer enthusiasm for AI-generated content has dropped from 60% to 26% in two years. 52% of consumers actively reduce engagement when they suspect AI content. The audience is voting with their attention, and most organizations can’t hear it because only 19% track AI-specific performance metrics. They automated production without instrumenting quality.
That erosion of consumer trust won’t show up cleanly in a dashboard, and by the time you do see it, the damage will be hard to undo.
The brands closing the gap.
Some brands at Shoptalk this week were telling a more deliberate story: where the human stays, and why.
Dutch Bros CEO Christine Barone took the main stage: “Emotion is the product, not the coffee.” Every brand says some version of this, but Barone is actually running the operations on it. When Dutch Bros rolled out mobile ordering, she said the time saved gets reinvested by deploying more people to engage with customers in line. Their app surfaces customer names and digital stickers on the barista’s screen, conversation starters so the human at the window is better at the thing that actually matters. They’re a sophisticated technology company (they poached Lululemon’s CTO, they have 15 million loyalty members) but the tech serves the interaction. Revenue up 28% last year, foot traffic up 13.8% while Starbucks and Dunkin’ saw declines.
Macy’s told a similar story. Their “Ask Macy’s” AI tool was built with an explicit constraint: decision-making stays with the customer, never the bot. Chief Stores Officer Barbie Cameron put it simply: “At the end of the day, it all boils down to how you make a customer feel.” They could have built an agent that recommends, selects, and checks out for you. They drew the line at recommendation, because they understand that the moment the machine decides for the customer is the moment you lose the relationship.
Then there’s New Balance, where CEO Joe Preston described 180% growth over five years and eighty new stores in 2025 — his entire growth narrative built on taste (and nostalgia), pricing discipline, and brand conviction without a single mention of AI. Sometimes the most sophisticated technology decision is knowing what doesn’t need it.
The question you should be asking.
I know the pressure. Your board wants an AI strategy. Your CFO wants cost reduction and is eyeing your headcount. The pressure to adopt fast is real and it’s coming from above. I’m not telling you to slow down, but I am telling you that the brands I saw winning at Shoptalk started with a specific question: “Where does judgment live in our workflow, and who owns it?”
The organizations pulling ahead right now have two things: clean data and someone in the room with enough experience to know what the tool can’t see. Judgment as an operational function, not a soft skill. The person who can look at an AI-generated campaign and tell you it’s technically sound and strategically empty, explain why, and know what to do instead.
That means asking harder questions about your AI implementation than “how fast can we scale this.” Questions like: who on your team has actually done this work by hand, and are they in the approval chain? When your AI generates a brief, who’s evaluating it? Did that person ever write one from scratch? What percentage of your AI drafts get substantially rewritten before they ship? Only 19% of companies using AI in content workflows have any measurement framework that distinguishes output quality from output volume. If your team can’t answer these questions, you’ve automated production without building the feedback loop that tells you whether it’s working.
The brands I watched win at Shoptalk this week aren’t the ones with the most integrations. They’re the ones where judgment and expertise are built into the workflow, embedded in the decisions about what to build, what to buy, what to say, what to leave out, and when.
Clean data underneath, AI enabled, and judgment woven throughout the workflow. That’s it. That’s the stack that works.