Originally published January 16, 2026 · Edited February 24, 2026
The tools are changing. The questions remain human.
Every few months, a new AI capability emerges that makes yesterday’s impossible today’s routine. Documents summarized in seconds. Data analyzed while you wait. Communications drafted before you finish your coffee.
And with each advancement, the same anxiety ripples through organizations: What does this mean for my role? For my team? For the expertise I’ve spent decades building?
The Fear Is Real, but Misplaced
Research on AI adoption reveals a striking perception gap: leaders and employees are experiencing technological change very differently, and the anxiety runs deeper than most organizations acknowledge. Leaders report being among the most anxious, perhaps because they see both the potential and the uncertainty more clearly than others.
But here’s what the anxiety often misses: AI excels at processing information. It struggles with meaning.
What AI Handles
AI is genuinely transformative for tasks like these:
- Pattern recognition at scale: finding connections across thousands of documents that no human could read in a lifetime
- Information synthesis: pulling together data from multiple sources into coherent summaries
- Routine communication: drafting standard responses, formatting documents, handling predictable exchanges
- Analysis acceleration: running scenarios, modeling outcomes, processing variables faster than any spreadsheet
These capabilities are real. They’re changing how work gets done. And they’re not going away.
What AI Can’t Touch
But AI cannot tell you what matters.
It can analyze your organization’s data and identify patterns. It cannot tell you which patterns deserve your attention based on what you’re trying to build. It can draft a communication. It cannot know whether this moment calls for transparency or discretion, urgency or patience, directness or nuance. It can model outcomes. It cannot weigh those outcomes against values that only you can define.
The question AI can’t answer: What do you actually care about protecting when resources are constrained and priorities compete?
That’s a Define What Matters question. And no amount of processing power makes it easier. It requires the slow, difficult work of clarifying values under pressure, in the room where the pressure is real.
The Real Disruption
The leaders most disrupted by AI aren’t those whose tasks get automated. They’re those who never developed clarity about what they’re actually trying to accomplish.
When your value came from information access, AI is threatening. When your value comes from judgment about what to do with information, AI is a tool. When your authority came from being the person who knew things, AI is competition. When your authority comes from being the person who helps others navigate uncertainty, AI is irrelevant to your core contribution.
Circle of Safety and the Clarity That Comes With It
What Simon Sinek calls Circle of Safety becomes more important, not less, as AI handles more routine work. When people can’t tell what their contribution means, when their role feels precarious or replaceable, the instinct is to protect rather than contribute. They hoard information. They stop raising concerns. They optimize for looking productive rather than being useful.
Leaders who create environments where people feel secure enough to ask hard questions about their own roles, and to receive honest answers, are building the conditions where AI becomes an asset rather than a threat. The technology doesn’t determine whether people can engage honestly with uncertainty. The environment does.
Psychological Readiness for Perpetual Change
A pattern has become clear across years of peer conversations with practitioners navigating disruption: the leaders who adapt aren’t necessarily the most technically skilled. They’re the ones who’ve done the foundational work of knowing what they’re trying to protect. They can evaluate new tools quickly because they know what problems they’re solving. They can delegate to AI confidently because they know which decisions require human judgment. They can help their teams navigate anxiety because they’ve processed their own.
This is what psychological readiness for perpetual technological change looks like. It doesn't mean mastering every new tool; that's impossible. It means developing the clarity that lets you evaluate tools as they emerge.
The Methodology Still Works
The Shift That Sticks wasn’t designed for AI disruption specifically. It was designed for navigating change when the ground keeps moving.
- Face the Truth helps you see what's actually happening with AI in your context: not the hype, not the fear, but the real implications for your specific situation.
- Define What Matters clarifies what you're protecting regardless of which tools emerge. That clarity doesn't change when the technology does.
- Make It Real builds systems that execute your priorities, whether those systems involve AI or not.
- Reinforce the Change helps you maintain direction when new disruptions inevitably arrive.
The methodology works because it addresses the human questions that persist across technological shifts. Those questions don’t get easier with better algorithms. They get more important.
What the Pattern Reveals
Leaders who navigate AI disruption well tend to share a common trait: they did the clarity work before the disruption arrived. Their anxiety became data rather than paralysis, because they had a framework for evaluating what mattered. That kind of clarity isn’t a technology project. It’s a leadership development project. And it starts with seeing clearly what’s actually changing, and what isn’t.
Framework Connection
The anxiety around AI is a Face the Truth moment, seeing clearly what’s changing and what remains human. The question of what you protect under pressure is the core of Define What Matters. And building systems that work regardless of which tools emerge is Make It Real in practice.
Related Reading
- What is AI’s Morality? – The mirror vs. guardrails tension in AI reflects the same dynamic in organizational culture
- When Proven Leadership Meets Changed Conditions – When the conditions change, what worked before stops working
About Rob Duncan
Rob Duncan spent two decades watching what happens when leaders say one thing and protect another. As founder of Imagine That Performance, he works with city managers, county administrators, and government leaders through Think Tanks, workshops, and executive coaching to close the gap between intention and experience.
A question worth sitting with:
What AI means for your specific context, and what it doesn’t change about leadership, is the kind of question that benefits from peer perspective. City and county managers explore this in confidential Think Tanks, where leaders can think out loud about disruption without judgment.