The AI Fear Machine: Why the Job Apocalypse Narrative Is Working as Intended
Scott Galloway doesn't traffic in comfort. The NYU Stern marketing professor and co-host of the Pivot podcast recently spent nearly two hours on The Diary of a CEO dismantling one of the most effective — and deliberately constructed — narratives in business today: that AI is coming for your job, and there's nothing you can do about it.
His core argument deserves more attention than a LinkedIn clip. So here's the version with the context left in.
The catastrophizing is the pitch
When you hear a tech CEO describe AI as an extinction-level threat to human employment, Galloway says you should ask one question: who benefits from that framing?
The answer is the person saying it. The more terrifying AI sounds, the more justified a $200 billion valuation becomes. Disruption at civilizational scale requires civilizational investment. The fear narrative isn't a warning — it's a fundraising mechanism dressed up as candor.
The actual employment data doesn't cooperate with the story. US unemployment is 4.5%. Youth unemployment is 8.8% — slightly below historical averages. New business formation per capita has doubled over the last decade. These aren't the numbers of an economy being hollowed out by machines. They're the numbers of an economy in transition, which is a different thing entirely.
Galloway believes AI will create more jobs than it destroys over the medium and long term. He points to radiology — once identified by AI pioneer Geoffrey Hinton as a profession on the verge of obsolescence. Instead, demand for radiologists has grown. AI accelerated image analysis. Radiologists moved into higher-value diagnostic and consultative work. The role didn't disappear. It upgraded.
That pattern is likely to repeat across more fields than the apocalypse crowd is willing to admit.
Who actually likes AI — and why
Here's where Galloway lands a harder punch. Your view of AI, he argues, is almost perfectly correlated with your income. The only cohort consistently optimistic about artificial intelligence earns over $200,000 a year. Everyone else ranges from skeptical to anxious.
This isn't a gap in understanding. It's a gap in exposure and access. Higher earners are using AI as a productivity multiplier and watching it lift their portfolios as AI-adjacent companies dominate the S&P. Middle-income workers are more likely to encounter AI as a cost-cutting rationale — the reason a role gets restructured, a team gets smaller, or a budget gets redirected.
They're experiencing different technologies even when they're using the same tools.
This matters if you're thinking about AI adoption in your organization. The resistance you encounter from your team isn't irrational. It's often a rational response to a narrative that hasn't been honest with them. When every headline frames AI as a job-destroyer, and the people championing it most loudly are the ones who stand to gain the most financially, skepticism is a reasonable position.
What operators know that the pitch deck doesn't say
I've spent two decades managing retail operations and leading sales organizations across hundreds of locations and thousands of accounts. I've built AI tools, trained them on real business data, and watched them work — and watched them fail when the underlying process wasn't solid enough to build on.
Here's what that experience taught me that the investor narrative never mentions: AI scales dysfunction as efficiently as it scales good process.
That's not a warning against AI. It's a warning against skipping steps. When a business has unclear roles, inconsistent data, or a sales process held together by tribal knowledge and habit, AI doesn't fix any of that. It accelerates it. Bad targeting gets automated. Inconsistent messaging gets generated faster. The gaps in your territory coverage get wider because now you're moving at speed without a map.
I've seen this play out in inventory decisions where AI recommendations were built on years of improperly coded product data. The model was sophisticated. The output was garbage. And because it came from an algorithm, people trusted it longer than they would have trusted a human making the same mistakes.
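For readers who build these systems, that failure mode is concrete and catchable. A minimal sketch of the kind of data-quality gate that flags miscoded records before they ever reach a model — field names, categories, and thresholds here are purely illustrative assumptions, not anything from a real system:

```python
# Hypothetical data-quality gate run before product data reaches a
# recommendation model. Field names and categories are illustrative.

VALID_CATEGORIES = {"grocery", "apparel", "electronics"}

def audit_products(records):
    """Split records into clean rows and flagged rows with reasons."""
    clean, issues = [], []
    for r in records:
        problems = []
        if r.get("category") not in VALID_CATEGORIES:
            problems.append(f"unknown category: {r.get('category')!r}")
        if not isinstance(r.get("unit_cost"), (int, float)) or r["unit_cost"] <= 0:
            problems.append("missing or non-positive unit_cost")
        if problems:
            issues.append((r.get("sku", "?"), problems))
        else:
            clean.append(r)
    return clean, issues

records = [
    {"sku": "A1", "category": "grocery", "unit_cost": 2.5},
    {"sku": "B2", "category": "GROC", "unit_cost": 2.5},   # miscoded category
    {"sku": "C3", "category": "apparel", "unit_cost": 0},  # bad cost
]
clean, issues = audit_products(records)
print(len(clean), len(issues))  # 1 clean record, 2 flagged
```

The point isn't the specific checks — it's that a gate like this runs cheaply, every time, and surfaces the years of miscoding a sophisticated model would otherwise silently absorb.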
The fear narrative says AI will take your job. The real risk for most businesses is something quieter: that they'll implement AI on top of operational foundations that were never built to support it, spend the budget, announce the initiative, and wonder eighteen months later why nothing changed.
Galloway is right that the catastrophizing is largely theater designed to move capital. But the practical question for most operators isn't whether to fear AI — it's whether they've done the foundational work that makes AI actually useful. Clean data. Documented process. Clear accountability. These aren't prerequisites that AI eliminates. They're prerequisites that make AI worth having.
That gap between the hype and the execution is where most businesses actually live. And it's the part nobody on the investor circuit is talking about, because there's no valuation story in telling people to fix their processes first.
The episode is worth your time: The Diary of a CEO, wherever you listen. https://open.spotify.com/episode/3wdHMQ6Tsyz0IExwxpTO0Z?si=d2e11e1a842742b0
Brad Gullion
Founder, Fieldnote
I help business leaders apply AI to improve decision-making, workflows, and performance inside real teams.
Follow for practical insights on what’s actually working—and what isn’t.