If AI literacy is knowing what AI can do, AI fluency is being able to do it. This distinction matters because almost every AI training program in Australia teaches literacy. Almost none teach fluency. Here's what fluency actually involves, why it matters for employment, and how it changes the way we think about AI workforce training.
The problem with "prompt engineering"
For the past two years, the dominant frame for teaching people to use AI has been "prompt engineering" — the idea that writing better instructions produces better results. Courses taught templates, formulas, and tricks: "Act as a [role]." "Be specific." "Use step-by-step instructions."
This was useful when models were less capable. It's increasingly irrelevant now.
Fortune reported in May 2025 that prompt engineering as a job title has effectively collapsed. The dedicated role that commanded six-figure salaries in 2023 has been absorbed into general work competence. Fast Company described the shift bluntly: rather than hiring specialists, companies now treat communicating with AI as a baseline skill and train it across all roles. Microsoft's 2025 Work Trend Index ranked "Prompt Engineer" second-to-last among roles companies plan to add.
The reason isn't that communicating well with AI stopped mattering. It's that the models got good enough that the bottleneck shifted. When a model can handle a casually worded request almost as well as a carefully engineered prompt, the skill that matters isn't crafting the perfect instruction — it's knowing what to ask for, how to evaluate what comes back, and what to do with it.
Andrej Karpathy, the former OpenAI researcher who coined "vibe coding," reframed this in mid-2025 as "context engineering" — the skill of deciding what information the model needs to see and in what order. His analogy: if the AI model is a processor, the context window is its working memory, and the human's job is to be the operating system that loads the right information at the right time.
For non-technical workers, the practical version of this shift is simpler: stop thinking about how to talk to AI. Start thinking about how to work with AI.
Defining AI fluency
AI fluency is the practical ability to use AI tools to do real professional work — not just ask questions, but solve problems, produce quality output, and make better decisions.
It sits above AI literacy (basic understanding of what AI is and can do) and below AI expertise (building or engineering AI systems). It's the applied middle tier that most working adults actually need.
Anthropic, the company behind Claude, published an AI Fluency Index in February 2026 based on empirical analysis of nearly 10,000 real conversations. Their research team, working with academics Rick Dakan and Joseph Feller, identified four core competencies they call the "4Ds":
Delegation. Deciding whether, when, and how to involve AI in your work. This includes recognising which tasks AI handles well, which it handles poorly, and which you should do yourself. It also means choosing the right tool — not every AI model is the same, and not every task needs AI at all.
Description. Communicating effectively with AI systems. This goes beyond prompt templates to include providing relevant context, setting constraints, specifying what success looks like, and adjusting your communication based on how the model responds. Only about 30% of users in Anthropic's dataset explicitly set terms for how they wanted the AI to interact — things like "push back if my reasoning is weak" or "tell me what information you'd need to do this better."
Discernment. Critically evaluating AI output. This is the skill most consistently absent in casual users and most consistently present in fluent ones. Anthropic found that iterative users — people who went back and forth with the model rather than accepting the first response — were 5.6 times more likely to question the model's reasoning. They also found that polished-looking output actually reduces critical scrutiny, which means the better AI gets at producing professional-looking work, the more important this skill becomes.
Diligence. Using AI responsibly and maintaining accountability. This includes understanding privacy implications (what you should and shouldn't share with AI tools), knowing when AI use is appropriate in your professional context, verifying claims before acting on them, and being able to explain and stand behind work you've produced with AI assistance.
What fluent AI users do differently
The most detailed evidence on this comes from a series of studies at Harvard Business School in collaboration with Boston Consulting Group. In their original study of 758 BCG consultants, those using AI on tasks within the model's capability zone completed 12% more tasks, 25% faster, with over 40% higher quality. But — and this is the critical finding — consultants who trusted AI on tasks outside its reliable capability zone were 19 percentage points less likely to reach correct answers than people who used no AI at all.
The researchers identified two effective working patterns. Centaurs strategically divide tasks: they do the parts that require human judgment and hand off the parts where AI is reliable. Cyborgs integrate more deeply, weaving human and AI work together at the level of individual subtasks — drafting, evaluating, redrafting, combining AI output with their own thinking.
Both approaches require the same underlying skills: knowing what AI does well, evaluating output critically, and exercising judgment about what to keep, what to change, and what to redo.
Microsoft's Work Trend Index added another dimension. They found that the AI users who save the most time — 30+ minutes per day — don't just use AI for individual tasks. They rearchitect entire workflows. They're 49% more likely to start any task by asking "Can AI help with this?" and they experiment persistently, trying different approaches when the first attempt doesn't work rather than concluding AI isn't useful.
In practical terms, fluent AI users do things like:
They provide context before asking. Instead of "Write me a cover letter," they explain who they are, what role they're applying for, what experience is most relevant, what tone the company seems to prefer, and what they want to emphasise. They treat the AI as a collaborator who needs briefing, not a magic box that reads minds.
They iterate on the first response. Instead of accepting or rejecting the output wholesale, they engage with it: "The second paragraph is too generic — can you make it specific to my experience managing logistics teams?" "You've missed the fact that this role requires someone comfortable with ambiguity — weave that in." The quality comes from the conversation, not from the initial request.
They know when the model is wrong. They bring their own expertise to the table. A former financial services professional reviewing AI-drafted hardship correspondence will catch regulatory inaccuracies that a non-expert would miss. A tradesperson evaluating an AI-generated quote will spot unrealistic timelines. Domain expertise is the quality control layer.
They build processes, not just outputs. When they find an effective AI workflow for a recurring task, they document it and refine it. They develop personal playbooks — not prompt templates, but sequences of steps that reliably produce good results for the kind of work they do.
Why domain expertise is the multiplier
There's a persistent myth that AI levels the playing field — that inexperienced people with good prompts can match experienced professionals. The research doesn't support this.
A 2025 follow-up study by the same Harvard researchers (published as "The Cybernetic Teammate") found that while AI significantly boosted the performance of less-experienced workers, the top results consistently came from experienced professionals using AI. Domain expertise isn't replaced by AI fluency — it's amplified by it.
This has direct implications for workforce training. Career transitioners and displaced workers often carry years of professional experience that they undervalue during a career crisis. A former project manager who spent 15 years coordinating complex deliverables has an enormous advantage using AI for planning, communication, and stakeholder management — but only if they know how to direct the model using their existing knowledge.
The most effective AI fluency training doesn't teach AI in isolation. It teaches people to connect AI capabilities to the professional knowledge they already have.
What this means for workforce training
If fluency is the goal, then the standard approaches to AI training are structurally insufficient.
A 20-minute awareness module builds literacy, not fluency. A one-day ChatGPT workshop introduces tools but doesn't build the iterative habits, critical evaluation skills, or workflow design capabilities that distinguish fluent users. A self-paced online course provides information but not the practice-and-feedback loop that develops genuine competence.
Fluency requires practice with feedback, and it develops over multiple sessions — not in a single sitting. It requires working on real tasks in the learner's own professional context, not generic exercises. It requires building evaluation skills through encountering AI mistakes and learning to catch them, not just through theoretical discussion of AI limitations.
For organisations designing AI workforce programs — government, employment services, outplacement providers, EAPs — this means the training investment question isn't "awareness or specialist?" It's whether you're willing to fund the practical middle: structured, multi-session programs that build the applied skills the job market actually demands.
For individuals navigating career transition, the implication is encouraging: the path to AI fluency isn't learning to code or mastering technical AI concepts. It's learning to direct AI using the professional judgment you've already built — then practising enough to make that second nature.
TEKVA's approach to AI fluency training
TEKVA's 8-session AI fluency program is designed specifically for working-age Australian adults in career transition. Each session builds a specific capability — from understanding the shift from prompting to directing, through to workflow design and demonstrating AI competence in job interviews.
The program is built on three principles:
Practice over theory. Every session involves working on real tasks with real AI tools. Participants produce outputs they can actually use — cover letters, career plans, professional communications, work samples.
Domain expertise as foundation. We don't teach AI in a vacuum. We help people connect AI capabilities to the professional knowledge they already carry. A former retail manager and a former healthcare administrator will use AI differently, and the program reflects that.
Critical evaluation as habit. From Session 2 onwards, every exercise includes deliberate practice at catching AI mistakes, questioning confident-sounding output, and distinguishing between useful drafts and work that needs substantial human improvement.
The result isn't someone who can write a good prompt. It's someone who can sit down with an AI tool and demonstrably use it to do professional work — which is what employers are now looking for.
TEKVA is an Australian charity (PBI, DGR1) working at the intersection of financial hardship support and AI capability building.
Related reading
From Prompting to Directing
The bottleneck in AI use shifted from knowing what to type to knowing how to think. Most training programs haven't caught up.
Australia's AI Fluency Gap: Why Awareness Training Isn't Enough
Australia's AI training system produces awareness, not competence. Here's why the gap between basic AI literacy and applied AI fluency matters for workforce outcomes.
The Quiet Part
Most people think learning AI is a technology problem. It isn't. It's a confidence problem dressed up in a tech wrapper.
This article is published under a Creative Commons Attribution 4.0 International License. You are free to share and adapt this work with attribution to TEKVA.