Humans are wired to conserve mental effort. Our brains take shortcuts whenever possible, seeking efficiency over exertion. That instinct has served us well in many contexts, but in a world of powerful generative AI, it can work against us.

When we repeatedly turn to AI to think, decide, and synthesize on our behalf, we risk strengthening our reliance on an external system and weakening our own critical reasoning muscles.

Multiple recent studies and expert voices suggest that overreliance on AI tools correlates with diminished critical thinking, reduced memory engagement, and a growing pattern of cognitive offloading, where we rely on external systems rather than our own judgment. In some cases the effect appears measurable: research has found a significant negative relationship between frequent AI use and critical thinking, with heavier dependence on AI tools linked to lower critical-thinking performance, especially among younger users.

Concerns like these surfaced repeatedly during discussions with global experts at the World AI Cannes Festival. The applause was for technological progress. The caution was about skill atrophy.

Technological advancement and cognitive risk

There is no question that AI enhances efficiency. But several researchers and education leaders I heard at the World AI Cannes Festival warned that this efficiency can become cognitive outsourcing.

As AI handles synthesis, summarization, and decision support, our brains are engaged less intensely, which can weaken deep thinking over time.

For example, recently reported research using neural measures found that participants working with an AI model showed lower brain engagement on tasks requiring creative and analytical effort than participants working unaided.

This pattern mirrors a general trend in how people use technology. The phenomenon of “cognitive offloading”—trusting external tools to store and retrieve information—has been linked to reduced memory engagement and less effortful reasoning in traditional computer-based tasks, and now appears to extend to generative AI tools.

Critical thinking as a workforce imperative

At the same time, employers are signaling clearly that critical thinking is not optional in the future of work.

The World Economic Forum’s Future of Jobs Report 2025 found that analytical thinking remains the top core skill employers demand, and that 39 percent of workers’ current skills are expected to change by 2030, reflecting the disruption brought by automation and AI.

Even before AI’s latest surge, research had already pointed to critical thinking gaps among younger cohorts entering the workforce. Studies on workforce readiness consistently identify analytical reasoning and complex problem-solving among the top skills employers seek but report as insufficient in many candidates.

Now the concern is not only that many people lack these skills; it is that the very tools we build to support work may accelerate their erosion if we do not use them intentionally.

The ease of outsourcing judgment

One pattern I see in my work—repeated in discussions with colleagues at Cannes—is treating AI as a default thought partner.

“Act as a McKinsey consultant and advise on this decision.”
“Summarize this strategy and recommend next steps.”

These are useful prompts. They save time. But without deliberate human processing of the output, they can short-circuit the effortful reasoning that builds judgment.

When we accept machine recommendations without wrestling with the underlying logic, we stop exercising evaluative thinking.

In effect, we begin to follow AI’s suggestions rather than subjecting them to our own context and criteria. Because the machine’s output is convenient, the mind’s work becomes optional. Habitually skipping that work can weaken cognitive resilience.

A simple experiment in reclaiming effort

I am not rejecting AI. But I am experimenting with how I use it.

My current approach is to let AI do roughly 80 percent of the initial legwork and then intentionally complete the final 20 percent of thinking offline—without the model. Instead of refining an AI output from 85 percent to 95 percent through iterative prompts, I pause. I review the insights, then think through what they mean, what matters, and what implications follow.

If AI lists potential blind spots in a meeting, I take the bullets and then spend time deciding which ones truly reflect risk, which are irrelevant, and which reshape the strategy. If AI suggests options, I evaluate them before choosing a direction.

Deep thinking is not an inefficiency to eliminate; it is a capability to sustain.

Intentional AI for intentional development

The conversations at the World AI Cannes Festival, and the research emerging globally, point to a clear theme: AI is neither inherently harmful nor inherently empowering. Its impact depends on how humans integrate it into thinking processes.

This has practical consequences for capability development. If we design learning around passive consumption, AI could accelerate skill decline. If we design learning to require active engagement, reflection, and decision quality, AI can be a partner in strengthening thinking.

That is the principle behind Beyond Role Plays. It uses AI to create structured, criteria-based practice inside realistic conversations, where learners must think, adapt, and decide—not just consume answers. The technology supports cognitive effort rather than replacing it.

In a world where cognitive outsourcing is a growing trend, the path forward is to use AI in ways that make thinking alive, not obsolete.