Block’s announcement that it will lay off nearly 40% of its workforce landed with a familiar explanation: AI is making the company more efficient. CEO Jack Dorsey framed the cuts as part of a “new way of working,” where smaller teams augmented by artificial intelligence can build and operate products more effectively.
Markets loved the story. Shares jumped immediately. But the deeper question is whether AI is truly the driver, or merely a convenient narrative device for corporate restructuring.
Pandemic Overhiring, Not Automation, Is the Real Denominator
Block’s workforce ballooned from roughly 3,800 employees in 2019 to more than 10,000 by 2025. That’s a hiring surge consistent with the broader tech sector, which expanded aggressively during the pandemic. As Reuters and CBC News have reported, many of the companies now citing AI as the reason for layoffs are the same companies that overexpanded during COVID.
Analysts quoted in the coverage describe Block’s cuts as a mix of “AI efficiency gains” and “overdue cleanup of corporate bloat.” Even Dorsey acknowledged that Block overhired during the pandemic. The difference now is that AI provides a cleaner narrative than “we scaled too fast.”
The Promise of AI Is Doing More Work Than the Reality
Tom Davenport, a leading scholar on AI in the workplace and author of All-in on AI, surveyed more than 1,000 executives as part of his research at Babson College. His findings were striking:
- Only 2% of companies making layoffs were doing so because AI had actually replaced work.
- The other 98% were cutting staff based on what they believed AI would eventually do.

This aligns with research from the MIT Sloan School of Management and the World Economic Forum’s Future of Jobs Report, both of which note that companies often overestimate short‑term automation potential.
Klarna is the clearest example. After claiming AI allowed it to shrink its workforce by 40%, the company later had to rehire because the systems weren’t robust enough. The narrative ran ahead of the infrastructure.
AI Is Becoming a Corporate Narrative Device
Executives have discovered that invoking AI does a lot of rhetorical work:
- Calms investors — “We’re ahead of the curve.”
- Deflects blame — “It’s not mismanagement, it’s automation.”
- Signals discipline — “We’re lean and efficient.”
- Justifies restructuring — “We’re reorganizing around AI-first workflows.”
This mirrors the early cloud era, when “moving to the cloud” became a catch‑all justification for cost‑cutting. But AI carries even more narrative power because it touches not just infrastructure but the definition of work itself.
The Risk: AI Becomes the New “Synergy”
When companies cut thousands of jobs based on what AI might do, they’re not optimizing—they’re gambling. They’re reorganizing around a future that hasn’t arrived yet. Research from Brookings shows that AI adoption is uneven, expensive, and often slower than executives expect.
Davenport’s call for “more precision and analysis” before making AI‑driven layoff announcements is not academic nitpicking. It’s a governance concern. If AI is the justification for structural changes, then the capabilities, limitations, and failure modes of those systems need to be part of the public conversation—not just the press release.
The Work Isn’t Disappearing. It’s Redistributing.
Despite headline‑grabbing cuts at big tech firms, recruiters report that technical roles remain in demand across industries. Companies are hiring people to build, integrate, monitor, and govern AI systems—even as they claim those same systems are making large swaths of work obsolete.
The labour market is shifting, not collapsing. Some roles are being automated, some are being re‑scoped, and some are being created from scratch. The danger is not that “AI takes all the jobs” overnight, but that AI becomes a convenient story for decisions that are really about capital allocation, risk tolerance, and executive incentives.
The Deeper Signal: AI as a Governance Problem
When executives use AI as a justification for sweeping structural changes, it reveals something important: AI isn’t just a technology. It’s a narrative lever that can reshape organizations long before it reshapes work.
That’s why precision matters. That’s why transparency matters. And that’s why we need to distinguish between:
- AI as a tool — real systems with measurable capabilities and constraints.
- AI as a story — a flexible justification for layoffs, restructurings, and “strategic pivots.”
Block’s layoffs aren’t primarily about what AI can do today. They’re about what executives want investors to believe it will do tomorrow—and what they can restructure in the meantime.
For teams building agent‑native tools and local‑first workflows, this is the contrast worth holding. AI should be evaluated on the work it actually does, the autonomy it genuinely enables, and the resilience it adds to real organizations, not on the stories it makes easier to tell when thousands of people are about to lose their jobs.