From Knowledge Scarcity to Judgement Scarcity

A senior partner at KPMG using AI to cheat on an internal AI assessment – and being fined for it – is the sort of irony usually reserved for sitcom writers. When I read the piece in Personnel Today, I laughed out loud. Then I stopped laughing, because beneath the farce sits a very serious question: if AI can generate plausible answers to an AI literacy test, what exactly are we testing? And more broadly, if knowledge is permanently available at our fingertips, what exactly are we teaching?



Economists and education scholars are increasingly talking about a move from “information scarcity” to “attention and judgement scarcity”. The OECD’s work on future-ready education emphasises “transformative competencies”: creating new value, reconciling tensions and dilemmas, and taking responsibility. These are not memory tasks. They are synthesis tasks. Similarly, research emerging from higher education over the past few years has stressed “assessment authenticity” – the need to move away from artificial, closed-book reproduction tasks and toward applied, contextual and reflective assessments. If AI can generate a passable essay, then perhaps the essay was never measuring what we thought it was measuring.


Even thinkers such as Yuval Noah Harari, in 21 Lessons for the 21st Century, argue that in a world of accelerating technological change, the most important skill may be the ability to continually reinvent oneself – to learn, unlearn and relearn. That requires a very different educational architecture.


So What Should We Teach?

If I step back from the KPMG anecdote and look at the bigger system, I see at least five capabilities that need far greater emphasis:


1. Learning How to Learn

Meta-cognition. Understanding how you absorb, organise and retain information. Knowing when you don’t understand something. Being comfortable with cognitive discomfort.


2. Information Literacy

Not just “can you use AI?” but:

  • Can you interrogate sources?

  • Can you detect persuasive rhetoric?

  • Do you understand how algorithms shape outputs?

  • Can you triangulate?


3. Critical Synthesis

The ability to take fragmented inputs from multiple domains and produce coherent insight. AI can summarise. It cannot (yet) hold accountability for judgement.


4. Ethical Reasoning

As generative tools become embedded in workflows, professionals need to understand boundaries. When is assistance legitimate? When does it become misrepresentation? The KPMG case is fundamentally about professional judgement, not technical capability.


5. Applied Problem-Solving

Real-world ambiguity. Open-book, AI-enabled scenarios. Collaborative reasoning. Defence of decisions in real time.


In many ways, this feels less like abandoning knowledge and more like moving up Bloom’s taxonomy: from remembering and understanding toward analysing, evaluating and creating.


And How Should We Test It?

This is where the real upheaval lies. If an assessment can be outsourced to a machine, the flaw is in the assessment design, not merely in the candidate who exploits it.

Possible shifts could include:

  • Oral defences and live problem-solving

  • Time-bound applied simulations

  • Iterative project work with reflective commentary

  • Transparent “AI-assisted” tasks where judgement is assessed, not content production

  • Portfolio-based evaluation over high-stakes memory tests


The legal and professional implications are significant. Certification systems exist to signal competence. If that signal becomes noisy because AI can game the test, the credibility of the qualification erodes. And that is not a small issue. Entire labour markets rely on certification as a proxy for capability.


The Real Irony

The most interesting aspect of the KPMG story isn’t that someone cheated.

It’s that the cheating method exposed a structural weakness in how we assess learning in the first place. AI didn’t “break” education. It simply revealed where it was fragile.

When a test measures recall, and recall is technologically trivial, the test becomes meaningless. When certification is based on artificial constraints (no external tools, closed book, solo work) that do not reflect how professionals actually operate, we create a mismatch between classroom performance and workplace reality.


In practice, no modern professional works without tools, collaboration, search engines or increasingly AI augmentation. Why do we test as if they should?


What This Means for Organisations

For businesses, this is not an academic debate.

Learning & Development strategies built around content-consumption modules and end-of-course quizzes may soon feel archaic. Compliance-based, tick-box testing will struggle to demonstrate genuine capability.

Instead, organisations may need to invest in:

  • Structured practice in judgement under uncertainty

  • AI-augmented workflows where employees are trained to challenge machine outputs

  • Scenario labs and cross-functional problem-solving forums

  • Continuous capability development rather than episodic certification

The shift is from knowledge delivery to capability cultivation.


The Big Question

Does it matter if we “know” things anymore? Yes. But perhaps not in the way we thought.

We still need foundational understanding. Without it, we are at the mercy of whatever the algorithm produces. But the true skill of the next decade may not be memorisation. It may be discernment. The partner who used AI to pass an AI test didn’t just break a rule. They inadvertently highlighted a deeper reality: if our assessments can be beaten by a prompt, then we are measuring the wrong thing.


Perhaps the more honest approach is this: let people use AI, then see how well they think.


That is likely to tell us far more about whether they are ready for the world we are actually entering.
