
AI and job losses: what we are not measuring

The real risk is not replacement. It is losing the ability to see what you are losing.

Block fires 4,000 people. The stock jumps 25%. Jack Dorsey calls it an AI efficiency play. Wall Street applauds.

The pattern is accelerating. And the European response is predictable: regulatory caution, social dialogue, wait and see.

Neither reaction addresses the actual problem.

The deskilling trap

The dominant framing—execution versus judgment—sounds clean. Automate the repetitive, keep the strategic. But this framing hides a dangerous dependency.

Judgment is not an innate trait of senior people. It is a capability built through years of doing the very work being automated. Junior analysts who never write a synthesis never learn to evaluate one. Managers who delegate analysis to AI lose the ability to challenge its conclusions.

When you compress execution, you compress the training ground for judgment. You don't optimize the organization. You hollow it out.

The AI stress test everyone talks about should measure this cognitive erosion risk—not just the headcount savings.

Optimizing blind

Most organizations automating around AI have no real visibility into their workforce capabilities. Skills data is scattered across HR systems, outdated CVs, and managers' intuitions. Annual reviews capture a snapshot from months ago.

In this context, deciding which roles to compress and which to keep is guesswork. Organizations cannot see:

  • Which skills are actually active versus merely declared
  • How capabilities are shifting as AI tools change daily work
  • Where critical knowledge lives and what happens when it walks out the door
  • Whether productivity gains are real or just displaced

You cannot transform what you cannot see. And most organizations are transforming blind.

Trust is the real bottleneck

70% of transformation programs fail (McKinsey). Rarely because the plan was bad. Usually because the people did not come along.

AI transformation has a specific trust problem. Employees who fear surveillance or replacement do not adopt the tools and do not evolve. They comply minimally or use shadow AI—external tools the organization does not control—creating exactly the security and governance risks the transformation was supposed to reduce.

The French "voie entre performance et humanité" (a path between performance and humanity) is real and worth defending. But it requires more than social dialogue and GPEC obligations (France's statutory workforce-planning framework). It requires systems that are architecturally incapable of surveillance—where what AI can observe, who controls the data, and how decisions are made is structural, not just policy.

The missing step zero

Before mapping functions, before quantifying savings, before building the AI architecture: build the capacity to see. In real time. How skills evolve. How trust holds or erodes. How adoption actually happens versus how the dashboard says it happens.

Without this continuous visibility, every transformation plan is a sophisticated guess. The organizations that navigate this transition will not be the ones that cut fastest. They will be the ones that see most clearly.

The question is not whether AI will reshape work. It is whether your organization has the instruments to navigate that reshaping—or whether you are flying blind.