How a powerful thinking partner can quietly become a decision-making crutch.
“At first, AI answers your questions. Then it starts shaping your choices. Eventually, you may find yourself taking direction from a system you thought you were supervising.”
That was the warning I heard at the World AI Cannes Festival. The speaker was not arguing against AI. His point was more precise and more uncomfortable: many of us are not consciously handing decisions to AI, but we may be doing it anyway. We ask it for advice, rely on it to reduce uncertainty, and then quietly allow its framing to become the path we follow.
The next day, I flew from France to the United States and spoke with a senior technology leader who was thoughtful, capable, and highly sophisticated in how he used AI across his company. He told me, with enthusiasm, that he used AI as an expert advisor. One of his examples was prompting it to act like a McKinsey consultant and advise him on what to do with his business.
That moment crystallized the issue. This was not a leader resisting AI, misunderstanding AI, or using it carelessly. He was using it in a way many executives would likely consider practical and forward-thinking. Yet he was also doing exactly what the speaker had warned about the day before: inviting AI into the decision-making process without fully naming the authority it was starting to hold.
That is the workplace risk leaders need to examine more carefully. AI can be a powerful thinking partner. It can help us analyze information, explore options, and move faster. But when its advice becomes the default input, its framing becomes the starting point, and its recommendation becomes the path of least resistance, AI begins to occupy a role it was never formally given.
It becomes the unofficial decision maker.
The appeal of a confident answer
The appeal is obvious. Leaders are under pressure to move faster, do more with less, and respond to uncertainty with confidence. AI offers a way to turn ambiguity into structure. It can produce options, identify risks, summarize trade-offs, and provide language that sounds strategic.
At the same time, it can be seductive. Hard decisions often require more than analysis. They require a leader to dig into the evidence, challenge assumptions, ask uncomfortable questions, understand the human consequences, and accept the risk of choosing one path over another. AI can help organize that process, but it cannot absorb the responsibility for it.
The danger is not that AI gives bad advice. Often, the advice is useful. The more subtle danger is that a useful answer can reduce the felt need to keep thinking. Once a recommendation sounds reasonable, well-structured, and executive-ready, it becomes easier to accept the frame instead of interrogating it. This is where AI starts to become a decision-making crutch. It does not need to formally make the decision to shape the decision. It only needs to make one option feel clearer, more rational, or less risky than the alternatives.
AI gives indecision a new form of legitimacy
Abdicating decision-making is not a new leadership problem. What is new is how easily AI can make avoidance look analytical, structured, and responsible.
In one conversation with a Sales VP, I heard a familiar version of this pattern inside a dysfunctional company. He described a CEO who had been unable to make a decision about a new organizational structure. Rather than choose a path, examine the analysis more deeply, ask sharper questions, or name the risks honestly, the CEO let the decision stall.
The CEO’s argument was, in effect, “The head of HR couldn’t tell me why not to pursue that path.”
So the organization stayed where it was. The CEO was not really looking for a better decision. He was looking for certainty. And because no one could provide a 100% certain answer, the status quo became the default.
That pattern existed long before AI. Leaders have always found ways to delay difficult decisions or shift responsibility for them. A consultant, a committee, a stakeholder, a data point, a board, or another executive can all become the reason a decision does not get made.
AI now offers another version of that escape hatch. It can provide a polished rationale, a structured comparison, or a confident recommendation that makes indecision look like diligence. Used well, it can sharpen the decision. Used passively, it can become one more way to avoid the discomfort of judgment.
From decision support to decision substitution
The real concern is not automation in the narrow sense. It is decision substitution.
Generative AI is different from earlier workplace tools because it does not merely calculate, search, or sort. It advises. It explains. It persuades. It frames. It can generate a recommendation that sounds like it came from a strategist, lawyer, consultant, coach, analyst, or communications advisor. That makes it especially powerful in the moments when humans are uncertain.
The problem is that uncertainty is exactly where judgment is supposed to develop. When a leader is unsure, the work is not simply to eliminate uncertainty as fast as possible. The work is to understand what kind of uncertainty exists, what trade-offs are being made, what context matters, and who will live with the consequences.
Microsoft researchers surveyed 319 knowledge workers and collected 936 examples of generative AI use at work. Their study found that workers who had higher confidence in AI tended to report less critical thinking effort, while workers with higher confidence in themselves tended to report more critical thinking effort. The study also found that generative AI can shift critical thinking away from producing information and toward verifying, integrating, and overseeing AI outputs. That shift is not automatically negative, but it changes where human judgment is exercised.
That distinction matters for leaders. If AI moves people from thinking through a decision to reviewing a decision-shaped output, the human role has already changed. The person may still feel in control, but the starting point, structure, and framing have been supplied by the model.
Automation bias with a strategy voice
Automation bias has always been a decision-making problem.
It describes the tendency to over-rely on automated systems, especially when their output appears authoritative or efficient. With generative AI, that risk becomes more difficult to detect because the output does not look like a machine instruction. It looks like advice.
A model does not need to say, “Choose option A.” It can make option A sound more coherent, more defensible, and more sophisticated. It can present the trade-offs in a way that subtly privileges one direction. It can produce the memo, the business case, and the implementation plan in a tone that makes the recommendation feel more complete than it is.
A 2025 study on automation bias in generative AI noted that users can over-rely on AI-generated outputs and fail to sufficiently question or verify them. The study used cognitive reflection tasks to examine whether people critically reflected on AI-generated responses. Its findings are relevant to workplace decision-making because many business decisions require deliberate reflection, not just acceptance of a plausible answer.
This is why the language of “AI as advisor” requires caution. Advisors influence decisions. They shape what leaders see, what they ignore, which risks appear salient, and which options feel legitimate. When the advisor is an LLM, the human still owns the decision, but may no longer fully own the reasoning that led to it.
The model’s confidence is not the same as judgment
AI outputs often feel more objective than they are because they are fluent, fast, and detached from the visible messiness of human debate. But fluency is not neutrality, and confidence is not judgment.
Large language models are trained on human-generated content, and that content reflects the unevenness of the world that produced it. Certain perspectives are overrepresented, others are missing, and many sources reflect commercial, cultural, political, or institutional incentives. This does not make AI unusable. It means its outputs need to be treated as inputs to judgment, not substitutes for it.
The problem is not simply bias in the abstract. The problem is that AI can normalize a particular framing of a decision before the human has even decided what question needs to be asked. It can make certain assumptions feel natural, certain risks feel secondary, and certain conclusions feel inevitable. In business, that matters because many consequential decisions are not purely analytical. They involve trust, timing, values, morale, customer impact, risk tolerance, and long-term capability.
A model can help clarify those factors. It cannot decide how much they matter.
The hidden risk is judgment atrophy
The deeper risk is not that people will stop thinking altogether.
The risk is that they will get less practice doing the specific kinds of thinking that make judgment stronger.
If AI generates the first recommendation, people get less practice forming an independent point of view. If AI frames the options, people get less practice deciding what options belong on the table. If AI drafts the rationale, people get less practice defending a decision in their own words. If AI anticipates objections, people get less practice identifying the real human, political, and operational tensions that may not appear in the prompt.
Research on AI tools, cognitive offloading, and critical thinking is still developing, but the early findings are worth paying attention to. A 2025 study published in Societies found a negative relationship between higher AI tool usage and critical thinking scores, with cognitive offloading mediating that relationship. The study does not prove that AI use inevitably weakens thinking, but it supports a practical concern: when people consistently outsource mental effort, they may reduce the effortful practice that critical thinking requires.
In organizations, that effect may show up quietly. Meetings may move faster, documents may look sharper, and recommendations may appear more polished. Yet beneath the surface, people may be getting fewer repetitions in the difficult parts of work: forming a point of view, challenging assumptions, asking better questions, and deciding without certainty.
That is why the decision-making question has to stay central. The risk is not only that AI produces an incorrect answer. The risk is that people become less able, over time, to know when the answer deserves to be challenged.
AI literacy is necessary, but not sufficient
AI literacy is important. Employees need to know how to use AI tools safely, effectively, and ethically. They need to understand privacy, hallucinations, data sensitivity, prompt quality, and verification.
But AI literacy alone will not solve the decision-making problem.
A person can know how to prompt well and still over-rely on the output. A leader can understand hallucinations and still accept a recommendation because it is convenient, confidence-building, or aligned with what they already wanted to do.
The more important capability is disciplined judgment. People need to know when to slow down, when to challenge the frame, when to seek context from another human, and when the decision cannot be delegated even if AI can produce a convincing recommendation.
This is especially important because the risk is not always visible as misuse. Often, it will look like effective use. A leader asks a smart question, receives a polished answer, improves the wording, and acts. On the surface, that looks competent. The deeper question is whether the leader meaningfully examined the reasoning or simply adopted the model’s structure as their own.
Human agency is now part of system integrity
Organizations have traditionally treated system integrity as a matter of security, privacy, compliance, and auditability. Those remain essential. But in an AI-enabled workplace, system integrity also needs to include human agency.
That means organizations need to define where AI is allowed to advise, where it is allowed to recommend, and where human decision-making has to remain explicit. It also means leaders need to pay attention to how AI changes the psychology of deciding. A system can be secure and compliant while still encouraging people to lean too heavily on its answers.
Human agency is not protected by telling people, “You are still accountable.” Accountability after the fact is not the same as active judgment during the process. People need routines that require them to name their assumptions, compare alternatives, explain why they rejected a recommendation, and identify what the model may not know.
Questioning AI cannot be treated as resistance to innovation. It needs to become part of competent AI use.
Judgment develops in use
Critical thinking is not a concept people master by reading about it. Judgment develops when people are placed in situations where they have to interpret context, respond to pressure, weigh competing needs, and decide what to do next.
A manager has to respond to a defensive employee. A salesperson has to decide whether to probe, pause, or challenge. A customer-facing employee has to de-escalate a frustrated client. A leader has to communicate a difficult change in a way that is honest, credible, and human.
In those moments, people cannot simply outsource the answer. They have to listen, interpret, choose, and respond. They have to notice nuance. They have to recover when the conversation shifts. They have to make decisions without perfect information.
This is why I am committed to Beyond Role Plays.
The point is not to use AI so people can avoid hard thinking. The point is to use AI to create realistic practice environments where hard thinking is required. In Beyond Role Plays, learners are placed in challenging workplace conversations where they have to act in the moment, decide what to say, and respond to realistic human dynamics. The AI character does not replace the learner’s judgment. It creates the conditions for the learner to practice judgment safely and repeatedly.
That distinction is critical. AI can either reduce human effort or strengthen human capability. The difference lies in design.
The leadership task ahead
The next phase of AI adoption will test more than technical readiness. It will test whether organizations can preserve human judgment while accelerating work.
This does not require leaders to be skeptical of AI as a default position. It requires them to be more precise about what AI is doing in the decision process. Is it expanding the leader’s thinking, or narrowing it? Is it surfacing better questions, or supplying premature answers? Is it helping people examine trade-offs, or helping them avoid the discomfort of choosing?
The organizations that get this right will not be the ones that slow-walk AI or treat it as a threat. They will be the ones that use it with discipline. They will encourage employees to use AI, but they will also build the habits, practice environments, and leadership routines that keep human judgment active.
AI can help us move faster. It can help us see patterns, generate options, and test assumptions. But the final responsibility for judgment still belongs to people.
That responsibility cannot be quietly delegated without consequence.
References
Lee, H. P. H., et al. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Microsoft Research. https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/
Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
Wingerter, T. L., Straub, N., & Schweitzer, F. (2025). Mitigating Automation Bias in Generative AI Through Nudges: A Cognitive Reflection Test Study. Procedia Computer Science, 270, 2106-2114. https://www.sciencedirect.com/science/article/pii/S1877050925030042