Originally published September 14, 2025 · Edited February 24, 2026
A recent LinkedIn discussion prompted me to consider the morality of AI. The conversation started with a simple observation that felt worth exploring more deeply.
It’s interesting that AI tools respond in a manner that aligns with the user’s thinking. I found myself wondering: do these large language models respond the same way to all users? If someone routinely used ChatGPT to write content attacking another person’s character, would ChatGPT default to always responding that way? Is the AI a mirror of the user, or does it have a predetermined, fixed set of morals and values?
I’m not sure which of these possibilities concerns me more.
The Mirror and the Guardrails
Here’s what I’ve learned exploring this question: the answer is both, and that’s what makes it complicated.
AI tools adapt to tone, context, and implicit instructions. If you use a casual tone, they respond casually. If you frame things positively, they reflect that framing. They’re designed to complete your conversation in a way that feels natural. At the same time, they’re built with predetermined rules and ethical guidelines, put in place by their creators to prevent harmful or unethical content. When a user pushes toward harmful responses, the AI is supposed to refuse.
The tension between those two things, the mirroring and the guardrails, is where the real questions live.
Why This Matters for Leaders
This isn’t just a technology question. It’s a leadership question about the environment you’ve built and what it produces.
Simon Sinek describes Circle of Safety as the boundary within which people can focus on the work rather than protecting themselves from internal threats. When that boundary is strong, the guardrails hold: people raise concerns, name problems, and act consistently with stated values even when no one is watching. When it’s weak, the organization becomes a mirror, reflecting back whatever behavior the environment has learned to reward, including behavior leaders would rather not see.
AI tools don’t continuously learn from every conversation. They start fresh each session from a baseline. But organizations do learn. They accumulate patterns. Every interaction reinforces or redirects the culture. Every decision about who gets recognized and what gets tolerated is a signal that compounds.
The Question Behind the Question
The deeper concern isn’t really about AI’s morality. It’s about ours.
AI is a tool that amplifies whatever it’s given. A leader who asks thoughtful questions gets thoughtful analysis. A leader who asks manipulative questions gets manipulation refined. The same is true of the teams we build, the cultures we foster, and the expectations we set. People respond to the signals they receive, the environment they inhabit, and the behavior that gets rewarded.
So the real question isn’t “What is AI’s morality?” It’s: what signals are we sending, and what are they creating? That’s a Define What Matters question. It asks leaders to examine whether the environment they’ve built has guardrails that hold under pressure, or whether it mirrors back whatever people bring to it.
The answer shows up not in mission statements but in which behaviors get promoted, which problems get named, and what happens when someone tests the boundary.
Framework Connection
The mirror versus guardrails tension described here is what Define What Matters addresses: establishing the values that govern behavior when nobody is watching. Face the Truth asks leaders to examine honestly whether their culture is mirroring patterns they’d rather change. The question of what remains fundamentally human in leadership connects to The Question AI Can’t Answer.
Research Foundation
- Political Dynamics – How signals create or erode trust in organizations
- Workforce Expectations – Understanding the gap between stated values and lived experience
About Rob Duncan
Rob Duncan spent two decades watching what happens when leaders say one thing and protect another. As founder of Imagine That Performance, he works with city managers, county administrators, and government leaders through Think Tanks, workshops, and executive coaching to close the gap between intention and experience.
A question worth sitting with:
The questions about technology, culture, and the signals leaders send don’t resolve easily in isolation. Think Tank sessions are where city and county managers examine them with peers who face the same constraints, in a room where thinking out loud doesn’t carry political risk. Learn how Think Tanks work.