Most people think learning AI is a technology problem.
It isn't.
It's a confidence problem dressed up in a tech wrapper. And almost nobody is talking about it honestly.
I've been watching people learn to use AI for the past year. Not in a lab. Not in a corporate training room with catered lunch and a facilitator who says "great question" too often. In the messy, real-world context of adults who've just had their careers pulled out from under them.
People who were good at their jobs three months ago. People who are now sitting across from me wondering if they've become obsolete.
This is what I've noticed.
The first session is almost always the same. Someone types a careful, polite question into Claude or ChatGPT. They get a reasonable answer. They nod. They say something like "that's pretty impressive."
Then nothing happens.
They don't know what to do next. Not because they're not smart. Because nobody taught them what "next" looks like.
There's a moment in every AI training program, if the program runs long enough to reach it, where something shifts.
It's not dramatic. There's no lightbulb. It usually happens around session three or four, when someone stops asking the AI a question and starts arguing with it.
"That's not right. The compliance framework changed in 2024."
"You're missing the context. The client would never accept that."
"Try again, but factor in that we only have two weeks."
That's the moment. Not when they learn to prompt. When they remember they know things.
The AI becomes useful the moment someone stops being impressed by it and starts being critical of it. That's fluency. And it has almost nothing to do with technology.
The strange irony of AI training is that the people who need it least get it most.
Corporate employees at big firms sit through half-day workshops where they learn prompt templates and best practices. Most of them were already using AI. The workshop gives them permission to admit it.
Meanwhile, the person who just lost their retail management job of twelve years. The person with deep operational knowledge, who could run circles around most people at scheduling, stakeholder management, and crisis handling. That person gets a three-hour TAFE microskill called "Introduction to AI" and a certificate that means nothing to anyone.
The twelve-year retail manager doesn't need an introduction. They need someone to show them that everything they already know becomes more powerful with AI beside it.
Nobody is doing that at scale.
We built TEKVA's program around a suspicion that turned out to be true: domain expertise is the multiplier, not the obstacle.
The Harvard/BCG research showed something interesting. AI boosted everyone's performance. But the people who produced the best work weren't the ones with the best prompts. They were the ones who knew their domain well enough to catch the mistakes, push back on the generic stuff, and apply judgment the model doesn't have.
But something doesn't make it into the research summaries.
When someone has just lost their job, they don't feel like an expert in anything. The professional identity that would make them excellent at using AI is the exact thing that took the biggest hit.
So the training problem isn't really "teach people to use AI."
It's "remind people what they're good at, and then show them how AI extends it."
That's a different kind of program. It requires more than a slide deck.
Anthropic's AI Fluency Index found that the strongest marker of fluency is iteration. Going back and forth with the model. Questioning it. Pushing it.
They also found something uncomfortable: the more polished AI output looks, the less people scrutinise it.
Think about what that means for someone in a vulnerable moment. Someone anxious, time-poor, possibly ashamed they need help. AI gives them a beautiful, confident-sounding cover letter in thirty seconds. Of course they accept it. It looks better than anything they could write right now.
That's not fluency. That's dependence with a professional finish.
Teaching someone to look at that polished output and say "this doesn't sound like me, and paragraph three is wrong about my experience." That takes practice. It takes sessions. It takes someone creating a space where being critical of a machine doesn't feel like being ungrateful for the help.
There's a question that keeps surfacing in policy discussions about AI and workforce training. It sounds reasonable: "How do we upskill workers for the AI economy?"
But listen to what's underneath it.
The framing assumes workers are deficient. That they need to be brought up to some standard the economy has set. That the gap is in them.
The gap isn't in them. The gap is in the training system.
Australia has awareness courses and university degrees. The practical middle, where someone actually learns to do their job differently, barely exists. Jobs and Skills Australia recommended building it. The National AI Plan calls for it. Nobody has built it at scale.
TEKVA is trying. Real tasks. Real outputs people can use in their next job application. Built for people who are under pressure, not people who have the luxury of learning for learning's sake.
There aren't many others doing it. Which tells you something about the state of the market.
The part that stays with me isn't the AI.
It's watching someone who walked in defeated walk out with a cover letter they're actually proud of. Not because the AI wrote it. Because they directed it. They knew what to emphasise. They caught the errors. They made it theirs.
That's a small thing. But it's the kind of small thing that changes what someone believes is possible for them next.
We keep talking about AI fluency like it's a workforce development metric. Something to measure, benchmark, and report on.
Maybe it is.
But up close, it looks more like someone rediscovering that they're not obsolete. That the things they learned over twenty years didn't stop being valuable. That the world shifted, and they can shift with it.
That's the quiet part. The part the policy documents miss.
TEKVA is an Australian charity (PBI, DGR1) that provides AI fluency training for adults navigating career transition and financial hardship.
Related reading
From Prompting to Directing
The bottleneck in AI use shifted from knowing what to type to knowing how to think. Most training programs haven't caught up.
What Is AI Fluency? A Practical Framework for Workforce Training
AI fluency is the practical ability to use AI tools to do real professional work. Here's what it means, why it matters for employment, and how it changes workforce training.
The Displaced Worker AI Paradox: Who Gets Trained and Who Gets Left Behind
Workers most at risk of AI displacement are least likely to receive AI training. Here's why Australia's current system fails career transitioners — and what needs to change.
This article is published under a Creative Commons Attribution 4.0 International License. You are free to share and adapt this work with attribution to TEKVA.