Prompt engineering is dead. Nobody held a funeral.
The job title peaked on Indeed in April 2023. By mid-2025, Fortune was writing the obituary. The six-figure role that was supposed to be the career of the future lasted about eighteen months.
What killed it wasn't backlash. It was the models themselves. They got good enough that the perfect prompt stopped mattering. You can type a sloppy, half-formed thought into Claude or GPT-4o and get a perfectly reasonable response. The bottleneck moved.
It moved from knowing what to type to knowing how to think.
Most AI training hasn't noticed.
There's a course I keep seeing advertised. Some variation of "Master AI Prompting in 3 Hours." It teaches templates. "Act as a [role]." "Be specific." "Use step-by-step instructions."
These courses are selling maps to a city that moved.
Andrej Karpathy, who coined "vibe coding" in early 2025, later endorsed the idea of "context engineering" as the real skill. His analogy is useful: if the AI model is a processor, the context window is its working memory, and the human's job is to be the operating system. You decide what gets loaded. What the model sees. In what order.
That's not prompting. That's directing.
The distinction matters because it changes who's good at this.
In the prompting era, the person who memorised the best templates won. It rewarded a certain kind of technical cleverness. It felt like coding for non-coders.
In the directing era, the person who knows their domain wins.
A Harvard/BCG field study showed this clearly. Researchers gave 758 consultants access to GPT-4. Inside AI's capability zone, people completed 12% more tasks, 25% faster, at 40% higher quality. Impressive numbers.
But outside that zone, on tasks where AI seems capable but isn't, performance dropped 19 percentage points below that of people who used no AI at all.
They fell asleep at the wheel. The output looked right, so they assumed it was.
The people who didn't fall asleep were the ones who knew enough about the subject to spot the mistakes. Domain expertise wasn't replaced by AI. It became the error detection layer.
I think about this every time someone walks into our program.
They've usually done the free TAFE microskill. Maybe the Google AI Essentials course. They can type a prompt. They know what AI is.
What they can't do is direct it.
They can't break a messy career problem into five smaller problems and feed each one to the model with the right context. They can't look at a polished AI-generated cover letter and say "paragraph three is wrong about my experience and the tone is too formal for this company." They can't hold a multi-turn conversation where they push back, redirect, and build toward something actually useful.
Those aren't prompting skills. Those are thinking skills. And they're the ones that matter now.
Anthropic's AI Fluency Index put numbers on this. They analysed nearly 10,000 real conversations and found that the single strongest predictor of fluency is iteration. Going back and forth. Questioning the model. Pushing it.
Iterative users were 5.6 times more likely to question the model's reasoning.
They also found something that should worry anyone designing AI training: the more polished the output looks, the less people scrutinise it. Beautiful formatting, clean structure, confident tone. People see that and stop thinking.
Which means the better AI gets at looking professional, the more dangerous it becomes for people who haven't learned to be critical of it.
What would it look like to actually teach this?
Not templates. Not "10 prompts that will change your life." Something more like structured practice in the skills that actually matter: decomposing problems, providing context strategically, evaluating output critically, iterating with specificity, catching errors using your own expertise.
The interesting thing is that none of those skills are technical. They're thinking skills. And they favour people who've spent years doing real work in real domains over people who are good at memorising formulas.
A former retail manager who spent twelve years handling rosters, suppliers, and staff problems has better raw material for AI fluency than a fresh graduate who can recite prompt templates. They just don't know it yet.
Microsoft's data says roughly 7 in 10 leaders would now prefer a less experienced candidate with AI skills over a more experienced one without. That's a significant shift. It means demonstrating how you think with AI might matter more than the last three lines on your resume.
But you can't demonstrate thinking if all you learned was templates.
One thing I keep coming back to.
Australia has awareness courses that run twenty minutes and university degrees that take three years. The practical middle barely exists: the part where a non-technical adult actually learns to work with AI.
The people who need it most are the ones least likely to get it. Corporate employees get half-day workshops. Career transitioners get a certificate that says "Introduction to AI" and a pat on the back.
The shift from prompting to directing should be good news for these people. It means the skill that matters isn't technical cleverness. It's judgment, experience, and the ability to think clearly about a problem before involving a machine.
Those are things a twelve-year retail manager has in abundance. They just need someone to show them that.
That's what we're trying to do.
TEKVA is an Australian charity (PBI, DGR1) that provides AI fluency training for adults navigating career transition and financial hardship.
Related reading
The Quiet Part
Most people think learning AI is a technology problem. It isn't. It's a confidence problem dressed up in a tech wrapper.
Framework
What Is AI Fluency? A Practical Framework for Workforce Training
AI fluency is the practical ability to use AI tools to do real professional work. Here's what it means, why it matters for employment, and how it changes workforce training.
Analysis
Australia's AI Fluency Gap: Why Awareness Training Isn't Enough
Australia's AI training system produces awareness, not competence. Here's why the gap between basic AI literacy and applied AI fluency matters for workforce outcomes.
This article is published under a Creative Commons Attribution 4.0 International License. You are free to share and adapt this work with attribution to TEKVA.