Most organizations agree on a basic premise: people need opportunities to practice before they’re expected to perform. Without practice, they fall back on trial-and-error learning with real customers, real teams, and real consequences.
Yet meaningful practice barely exists in most of them.
This isn’t because Learning & Development teams don’t value practice. It’s because the conditions needed for good practice rarely show up. Classroom time is compressed. Budgets don’t stretch to 1:1 coaching, professional actors, or repeated facilitated role plays. Trainers feel pressure to move content through the room, not slow things down for rehearsal.
So when practice does appear, it usually shows up as a moment.
A fishbowl role play with everyone watching.
An awkward peer exercise tacked onto the end of a session.
One attempt, a few polite comments, then on to the next slide.
This isn’t a failure of intent. It’s a failure of feasibility.
When Practice Happens, It’s Usually Too Shallow to Matter
Even when programs include role plays, they rarely target specific skills.
Peers can’t reliably evaluate judgment, reasoning, or decision quality. Feedback stays polite or vague. Facilitators can’t observe enough detail, across enough learners, to assess what actually happened in the conversation.
Learners leave having participated, but without clarity on what changed or what needs to happen differently next time.
Practice happened. Development didn’t.
Why AI Looked Like the Answer—and Why It Isn’t Enough
AI-based role-play tools emerged to fill this gap.
They promised scalable practice without facilitators, scheduling, or discomfort. Learners could practice independently. Organizations could finally offer repetition without driving up cost.
Many teams try DIY tools, Copilot, or ChatGPT prompts. At first, it feels productive. Conversations flow. Engagement goes up.
Then the same question surfaces again.
What skill actually improved?
Conversation Is Easy. Skill Is Not.
AI can generate dialogue. It can’t automatically generate the conditions that reveal whether someone can think critically, exercise judgment, or hold their ground under pressure.
Trying to manage complex conversational flows, policy constraints, emotional context, and non-obvious outcomes within a single prompt pushes the limits of automation. Real work doesn’t resolve cleanly. Consequences aren’t immediate. And the hardest skills only show up when things get uncomfortable.
Without intentional design, learners rehearse what feels safe and avoid what’s hard.
Why Learning & Development Still Matters
As AI becomes more accessible, the difference between activity and capability becomes impossible to ignore.
Learning & Development doesn’t exist to create more conversation. It exists to design practices that change what people can do when situations feel ambiguous, risky, or uncomfortable.
Practice isn’t missing because organizations don’t care.
It’s missing because good practice is hard to design.
And that’s exactly where expertise still matters.